Dual Boot Linux (Ubuntu 22.04) and Windows 11 on Modern Systems – UEFI

Thursday, December 7th, 2023

Dual Boot Linux (Ubuntu 22.04) and Windows 11 on Modern Systems – UEFI

To set up a dual boot of Windows 11 and Ubuntu 22.04 on a modern system that uses UEFI, follow these steps.

  1. Install Windows 11 first, leaving some unpartitioned space (I recommend at least 60GB) on the drive you're installing Windows on.
  2. Boot up the Ubuntu installer.
  3. During installation, you'll be presented with an Installation Type options screen.  Choose "Something else". 
  4. On the next screen, you'll see a list of drives and partitions.  On the same drive you installed Windows, create 3 new partitions. 
    1. Create an EXT4 partition for the / mount point, at least 40GB in size (this is the main partition for Linux files).
    2. Create a SWAP partition at least 18GB in size.
    3. Create an EFI partition at least 500MB in size.  This is extremely important for getting GRUB to install properly.
  5. Leave the "Device for boot loader installation" set to the top-level drive that Windows and Ubuntu are installed on.  You should not select an individual partition here.
  6. Complete the installation process. 
  7. You might need to change the UEFI boot order in your system's BIOS so that Ubuntu / Linux boots before the Windows EFI partition.  Since you created an EFI partition for your Linux install, it should show up as a bootable option in the BIOS.  Set / adjust accordingly (you can also do this from within Linux using efibootmgr, as shown below).
  8. That's it!
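
If you'd rather adjust the boot order from within Linux than from the BIOS, efibootmgr can do it.  A minimal sketch — the entry numbers below are placeholders, so read the real ones from the verbose listing first:

sudo efibootmgr -v               # list boot entries and the current BootOrder
sudo efibootmgr -o 0003,0000     # example: boot "ubuntu" (0003) before "Windows Boot Manager" (0000)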

Ubuntu: Allow Automatic Updates for Specific Packages Only

Tuesday, June 14th, 2022

Ubuntu: Allow Automatic Updates for Specific Packages Only

If you want to allow Google products and packages to update automatically, follow this guide.

You can also add additional sources that should update automatically following the same process.

This is helpful when using Selenium, WebDriver for Chrome, and Python.  Doing this allows you to always use the most up-to-date version of all of these dependent packages.
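
In case the linked guide disappears, the core idea is that unattended-upgrades only applies updates from repositories listed in its Allowed-Origins block.  First check the actual origin (o=) and archive (a=) values of the repository you want auto-updated:

apt-cache policy google-chrome-stable

Then add a matching "Origin:Archive" entry to /etc/apt/apt.conf.d/50unattended-upgrades.  The "Google LLC:stable" string below is only an example — match it to the o= and a= values from your own output:

Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}-security";
        // Example third-party origin; use the o= and a= values from apt-cache policy
        "Google LLC:stable";
};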

Tested in Ubuntu 20.04

Rebuilding a Removed / Failed RAID 10 Array in CentOS / Rocky Linux

Tuesday, February 22nd, 2022

Replace Hard Drive in a RAID 10 Array and Sync the RAID 10 Array to the New Hard Drive

I had the hardest time rebuilding a RAID 10 array after replacing a hard drive.  I didn't fail the old hard drive before removing it from the array, and sometimes, this may not be an option.  What happened in my case is the data center replaced the hard drive that I had shipped to them directly from an eBay seller.  I was hoping that the RAID array would rebuild itself onto the new drive (as I have seen happen before in some circumstances).  However, that may not happen if the replacement drive still has its old RAID array or partition information present, and then, it might be difficult to actually get the RAID array to sync to the new drive. 

In my case, I run LVM (Logical Volume Manager) for my partitions.  This complicates the RAID setup, and I found that mdadm commands didn't work as expected.  If this situation occurs, it is best to boot Rocky Linux or CentOS in recovery mode using a Rocky Linux ISO or CentOS ISO.  Once the recovery system loads, drop to a shell without mounting any file systems.  Next, you will need to deactivate your LVM volume group:

vgdisplay                        # identify your volume group name
vgchange -a n my_volume_group    # deactivate the volume group

Next, examine your md RAID array by running the following command:

cat /proc/mdstat

After running that command, I identified my RAID devices as md126 and md127.  /dev/md127 is considered the parent even though /dev/md126 is where the data actually lives. 

I can get more information about the RAID array by running the below commands:

mdadm --detail /dev/md126
mdadm --detail /dev/md127

Next, remove any failed or detached (no longer present) drives from both arrays:

mdadm /dev/md126 --remove failed
mdadm /dev/md126 --remove detached
mdadm /dev/md127 --remove failed
mdadm /dev/md127 --remove detached

Next, identify the new hard drive that will replace the removed drive in the array:

lsblk

From running the above command, I noticed that the new drive was /dev/sde, so I needed to wipe its old RAID configuration (if there is any) and then add it to the RAID array.

wipefs -a /dev/sde               # erase any old RAID / partition signatures (without -a, wipefs only lists them)
mdadm --add /dev/md127 /dev/sde

Check to see if the syncing process has started:

cat /proc/mdstat

You may or may not need to run the below command to get the RAID device to start syncing to the new drive:

mdadm --grow /dev/md126 --raid-devices=4
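
Either way, you can keep an eye on the rebuild as it runs; note the Rebuild Status line in mdadm's output only appears while a resync is in progress:

watch -n 5 cat /proc/mdstat                          # refresh the sync progress every 5 seconds
mdadm --detail /dev/md126 | grep -E 'State|Rebuild'  # show array state and percent complete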

Helpful Links:

https://delightlylinux.wordpress.com/2020/12/22/how-to-remove-a-drive-from-a-raid-array/
https://serverfault.com/questions/554553/how-to-delete-removed-devices-from-a-mdadm-raid1
https://unix.stackexchange.com/questions/53129/dev-md127-refuses-to-stop-no-open-files
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/vg_activate
https://serverfault.com/questions/676638/mdadm-drive-replacement-shows-up-as-spare-and-refuses-to-sync

CentOS LVM and Software RAID Partitioning Instructions

Sunday, May 30th, 2021

Installing and Configuring CentOS to Host KVM Virtual Machines

GUI

When configuring a fresh install of CentOS for a KVM host machine (the main server that hosts all of the virtual machines), I like to run a GUI to make managing some of the virtual machines easier.  Thus, during install, choose the option for CentOS with a minimal GUI.

RAID 10 LVM Partitions

When configuring the hard drive partitions, set them up to use LVM on top of RAID 10 software RAID:

Create a volume group called "vms" (without the quotes) that is set up as RAID 10 (set the volume group space to be as large as possible).

Set the "/" partition to 100GB XFS LVM (RAID10).

Set the "swap" partition to 32GB.

Only set up those two partitions.  The remaining space in the RAID 10 volume group "vms" will be used for KVM guests (and the remaining space does NOT need to be assigned to any mount points).
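
When you later create each KVM guest, you carve its disk out of the free space in the "vms" volume group.  A minimal sketch — the volume name vm1 and the 50G size are placeholder values:

sudo lvcreate -L 50G -n vm1 vms    # create a 50GB logical volume for a guest
sudo lvs                           # confirm the new volume exists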

That's all.

Adding SAS RAID Drivers to CentOS 8 and Red Hat Linux During Installation

Friday, April 30th, 2021

Adding SAS RAID Drivers to CentOS 8 and Red Hat Linux During Installation

CentOS 8 and Red Hat Linux 8 removed a lot of built-in RAID controller and SAS drivers.  As such, you'll need to identify your SAS RAID controller card model number, and then, during the installation of CentOS 8 or Red Hat, you will need to follow these instructions (modifying them for your hardware).

https://gainanov.pro/eng-blog/linux/rhel8-install-to-dell-raid/

If for some reason the link above is no longer available, I saved and archived a copy which can be read here.

Add El Repo Permanently

As updates are released for CentOS 8 / Rocky Linux / Red Hat 8, the kernel will often be upgraded.  To make sure the SAS drivers are updated as well, you'll need to configure your system to pull updates from El Repo automatically by using the following commands:

sudo rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
sudo yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm
sudo yum update -y

In case the above instructions no longer work, this guide should help.
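
With El Repo enabled, the SAS controller driver itself typically ships as a kmod package.  The exact package name depends on your controller — kmod-megaraid_sas below is only an example for LSI / Dell PERC cards:

sudo yum search kmod | grep -i sas    # find the kmod package matching your controller
sudo yum install kmod-megaraid_sas    # example: LSI MegaRAID SAS driver from El Repo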

Disable NetworkManager Wait Online Service

Prevent boot from being delayed by network connectivity checks at startup by running the below command:

sudo systemctl mask NetworkManager-wait-online.service
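
You can confirm the service is masked, and unmask it later if you ever need it back:

systemctl is-enabled NetworkManager-wait-online.service    # should print "masked"
sudo systemctl unmask NetworkManager-wait-online.service   # undoes the change if needed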

The Dangers of Using tcp_tw_recycle in Linux – Strange Intermittent Timeout Issues

Wednesday, January 13th, 2021

Do Not Use tcp_tw_recycle

I had a very strange connectivity issue recently that I was only able to reproduce intermittently on my own LAN when connecting to a few of my servers hosting websites that handle tons of simultaneous connections at any point in time.  Basically, my connection to a specific set of websites that I host would time out from my home internet connection.  However, I was never able to reproduce this issue when connecting to the same sites from other networks belonging to my family and friends. 

From my home connection, I used TCPView and saw connections stuck in the SYN_SENT state, meaning SYN packets had supposedly been sent to my servers to establish a connection.  Unfortunately, the server never replied to some of these requests.  As such, my connection would time out at times and work perfectly fine at others.  I looked at DD-WRT's connection table, and it also claimed that the packets had been sent but were in an UNREPLIED state when I experienced issues.  Thus, packets were being sent, but the server was not always responding.  After spending nearly a week trying to tackle this issue and buying new cable internet equipment (an officially supported Comcast modem), I tracked down the cause, and it ended up being a TCP configuration setting on my servers rather than my home LAN equipment.

Modem or Router's Fault?

Originally, I thought my issue was caused by the DD-WRT open source firmware I was running on my wireless router.  If I restored the router's settings to DD-WRT's factory defaults, I could always connect to the websites I was having intermittent connection timeout issues with.  An older router I tried didn't have any problems either, which made me suspect my current router even more.  I even upgraded the DD-WRT firmware to the latest version and rebuilt my complicated network configuration from scratch.  Unfortunately, the issue was still there.  Thus, despite mixed results with different routers, I started to wonder if the issue was on my server's end.

Finally Fixed

I started looking at sysctl TCP settings I could adjust on my router, and I ended up comparing some of these values to the ones used on my servers (that were hosting the problem websites).  Eventually, I came across configuration values I had changed myself several months ago which were supposed to help the server support multiple simultaneous connections.

After reading this StackOverflow thread (https://stackoverflow.com/questions/6426253/tcp-tw-reuse-vs-tcp-tw-recycle-which-to-use-or-both), I decided I would try disabling the tcp_tw_recycle setting.

/proc/sys/net/ipv4/tcp_tw_recycle was set to 1 (enabled) from tweaks I had applied that I had found on the internet.  After I disabled it, /proc/sys/net/ipv4/tcp_tw_recycle was set to 0 (disabled).  By default, Linux keeps tcp_tw_recycle disabled; again, this is something I had changed for tuning reasons.  After disabling this setting and rebooting the server, I no longer have any issues connecting to the servers in question.  No more connection timeouts, and everything works properly again.
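
For reference, here is how you would check and disable the setting on an older kernel that still has it (the sysctl was removed in kernel 4.12, so these commands will simply fail on newer kernels):

sysctl net.ipv4.tcp_tw_recycle            # read the current value (1 = enabled, 0 = disabled)
sudo sysctl -w net.ipv4.tcp_tw_recycle=0  # disable it immediately
# Also remove or comment out any "net.ipv4.tcp_tw_recycle = 1" line in
# /etc/sysctl.conf (or /etc/sysctl.d/*) so the change survives a reboot.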

I have no idea why I wasn't able to reproduce this issue on other networks.  I thought it was my network equipment (modem and router), but it ended up being the server.

Lessons Learned

Be careful when applying settings you find online.  Sometimes they may not work, or their usage may be buggy.  In fact, net.ipv4.tcp_tw_recycle was removed from Linux entirely in kernel 4.12.  The well-documented problem with it is that it breaks connections from clients behind NAT: the server validates TCP timestamps per source IP, and multiple machines behind one NAT present inconsistent timestamps, so some of their SYN packets get silently dropped — exactly the intermittent behavior I saw.  Do NOT use net.ipv4.tcp_tw_recycle!  I kept tcp_tw_reuse enabled, and that setting is safe to use without running into problems.  Just don't, for the love of anything, use tcp_tw_recycle!  It doesn't work, and it will cause you headaches trying to track down strange intermittent issues!


Linux Multiple Network Interfaces (NICs) – One Interface with Static Public IP and One Interface with Private DHCP LAN IP Address – Routes and Routing

Friday, July 24th, 2020

Linux KVM:  Using Multiple NICs and Routing Traffic Properly Between Them

When setting up a KVM guest to use multiple network interface controllers (NICs), additional ip routes may be needed in order for the additional interfaces to work properly.  For example, if you configure one NIC with a public static IP address and another NIC with an internal private DHCP LAN IP address, you must create a route so that traffic arriving on the DHCP LAN interface is answered via the same interface it came in on.  Otherwise, forwarded NAT traffic from the main KVM host to the internal DHCP LAN IP will reach its destination, but the response will never make it back (because the reply will be sent via the static IP interface, which may NOT be where the sender's request originated).

The Solution:

https://unix.stackexchange.com/questions/4420/reply-on-same-interface-as-incoming/23345#answer-23345

From the above link, the solution for me was to do the following in the KVM guest virtual machine:

Only needs to be done once:

sudo -i
echo 200 isp1 >> /etc/iproute2/rt_tables

Setting up the route (adjust variables as necessary):

sudo -i
ip rule add from <interface_IP> table isp1 priority 900
ip rule add from <interface_IP> dev <interface> table isp1
ip route add default via <gateway_IP> dev <interface> table isp1

The command I used for my specific setup:

sudo -i
ip rule add from 192.168.122.10 table isp1 priority 900 
ip rule add from 192.168.122.10 dev ens9 table isp1 
ip route add default via 192.168.122.1 dev ens9 table isp1
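
You can verify that the rules and the route landed in the right places before making anything permanent:

ip rule show                # both "from 192.168.122.10" rules should appear
ip route show table isp1    # should list the default route via 192.168.122.1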

Making it permanent (apply on system start up):

sudo -i
nano /etc/network/interfaces

I added the below post-up rules (adjust variables as necessary):

auto ens9
iface ens9 inet dhcp
        post-up ip rule add from <interface_IP> table isp1 priority 900
        post-up ip rule add from <interface_IP> dev <interface> table isp1
        post-up ip route add default via <gateway_IP> dev <interface> table isp1

The rules and route are created whenever the DHCP interface is brought up.

Installing Chrome WebDriver (Linux Script)

Wednesday, August 28th, 2019

Installing Chrome WebDriver (Linux Script)

Before running the below commands, find out which version of Chrome is installed on your system by running the following command:

google-chrome --version

Adjust the version number (replace {VERSION_NUMBER}) in the below commands to match the version installed on your system!

sudo -i
cd ~/Downloads
rm -f chromedriver_linux64.zip    # remove any previously downloaded copy
wget -N https://chromedriver.storage.googleapis.com/{VERSION_NUMBER}/chromedriver_linux64.zip
unzip -o chromedriver_linux64.zip
mv chromedriver /usr/bin/chromedriver
chown root:root /usr/bin/chromedriver
chmod +x /usr/bin/chromedriver
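
Rather than looking up the version by hand, you can ask the download site for the latest driver matching your installed Chrome major version.  This worked against the legacy chromedriver storage bucket (Chrome 114 and earlier) and assumes curl is installed:

CHROME_MAJOR=$(google-chrome --version | grep -oE '[0-9]+' | head -1)
VERSION=$(curl -s "https://chromedriver.storage.googleapis.com/LATEST_RELEASE_${CHROME_MAJOR}")
wget -N "https://chromedriver.storage.googleapis.com/${VERSION}/chromedriver_linux64.zip"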

Selenium and other libraries that rely on the Chrome WebDriver should now work properly.

Change the Default Editor to nano in Linux

Saturday, April 27th, 2019

Use nano as the Default Editor

If you hate vi like I do, you can configure Linux to always default to using the nano editor.

Simply add the following to the bottom of the /etc/bashrc file (on Debian / Ubuntu systems, the file is /etc/bash.bashrc):

export EDITOR="nano"

Save the file.  nano is now the default editor.  For example, when you run

sudo crontab -e

the crontab will now open in nano instead of vi.
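
On Debian / Ubuntu systems, there is also a separate system-wide default (used by tools that consult the alternatives system rather than the EDITOR variable), which you can point at nano with:

sudo update-alternatives --config editor    # choose /bin/nano from the menu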

Ubuntu Grub Fails to Install on RAID Array

Friday, February 6th, 2015

Ubuntu Grub RAID Issues

Grub Fails To Install on RAID Array

If grub fails to install on your RAID array in any version of Ubuntu, do NOT disable your BIOS RAID! The correct solution is at this blog entry. I'll summarize it below.

At the stage of the install where it attempts to install GRUB, the install target will be detected as

/dev/mapper

This is incomplete! That's why the GRUB install fails.

You need the actual name of the RAID array to install to. So during that step, press ctrl+alt+F2 to drop to a busybox terminal, then enter

ls -l /dev/mapper

Pick out the name of your array from the list shown, then press ctrl+alt+F1 to switch back to the install (you can switch back and forth as much as you like with no problems) and enter it in the field as

/dev/mapper/{your array name}  

Then GRUB installs perfectly and you're ready to go, with a proper BIOS RAID array intact.

System Won't Boot After Grub Failed to Install

If your system will no longer boot because you skipped installing or updating grub, you need to download an Ubuntu version that does support RAID, boot from the live CD, drop to a terminal, and then run:

ls -l /dev/mapper
sudo grub-install /dev/mapper/{ARRAY_NAME_HERE}

Setting Up RAID Array During Ubuntu Install

If you are configuring a BIOS RAID array for the first time on Ubuntu, you should create a 1MB boot partition.  Its partition type is "boot".  If you do this, grub will always install there and will succeed every time, even when upgrading or reinstalling grub.