Archive for the ‘CentOS’ Category

RAID Synchronization CRON Job Affecting Performance

Thursday, October 5th, 2023

RAID Synchronization CRON Job Affecting Performance

For some FakeRAID (firmware RAID) configurations, CentOS 7 and newer variants run a RAID synchronization job configured in the /etc/cron.d directory in a file named raid-check.

This job is responsible for making sure the RAID array is in sync across all drives.  It runs by default every week on Sunday at 1 AM.

# Run system wide raid-check once a week on Sunday at 1am by default
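
For reference, the full default entry under that comment is typically similar to the following (the exact path and schedule may vary slightly between releases):

0 1 * * Sun root /usr/sbin/raid-check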

However, this was not a convenient time for my users, who were often gaming then, so rather than let the check affect server performance, I changed the cronjob to:

0 5 1 * * root /usr/bin/test $(/usr/bin/date +\%u) -ne 6 && /usr/sbin/raid-check

Thus, the sync job now runs once a month on the 1st at 5 AM, and it is skipped if the 1st falls on a Saturday (date +%u prints the ISO day of the week, 1 through 7, and 6 is Saturday, so raid-check only runs when the test succeeds).  This applies to several of my C1100 servers.

Manage Dell C1100 CS24-TY Servers Using IPMI Tool

Thursday, December 22nd, 2022

Manage Dell C1100 CS24-TY Servers Using IPMI Tool

Dell C1100 servers can be managed using ipmitool, which talks to the low-level BMC (Baseboard Management Controller) interface.  Dell has also released a BMC-specific utility that complements the generic ipmitool and makes it easier to configure settings that are difficult to change using ipmitool on its own.

Here's how to use both ipmitool and the bmc utility.  First, log in as root:

sudo -i

Now, install the ipmitool:

yum install OpenIPMI ipmitool -y

Start the ipmi service:

service ipmi start

Download the bmc utility:

mkdir -p ~/Downloads/bmc
cd ~/Downloads/bmc
wget -N "http://dinofly.com/files/dell_c1100/bmc-2014-10-15.zip"

If your system is running Perl 5.26 or newer, download this version instead:

# For PERL 5.26+
wget -O "bmc-2014-10-15.zip" -N "http://dinofly.com/files/dell_c1100/bmc-2014-10-15_perl_5.26.zip"

Unzip the bmc tool and make it executable:

unzip bmc-2014-10-15.zip
chmod +x bmc

Here's how to change the BMC interface IP settings:

ipmitool lan set 1 ipsrc static
ipmitool lan set 1 ipaddr x.x.x.x
ipmitool lan set 1 netmask x.x.x.x
ipmitool lan set 1 defgw ipaddr x.x.x.x
./bmc nic_mode set [dedicated|shared]
./bmc allinfo

If the BMC web interface is not responding, try this:

ipmitool bmc reset cold
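
You can also manage the BMC remotely over the network once it has an IP address.  As a rough example (the address, username, and password below are placeholders for your own values), the lanplus interface can be used to query power state or read the event log:

ipmitool -I lanplus -H x.x.x.x -U root -P yourpassword chassis power status
ipmitool -I lanplus -H x.x.x.x -U root -P yourpassword sel list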

Rebuilding a Removed / Failed RAID 10 Array in CentOS / Rocky Linux

Tuesday, February 22nd, 2022

Replace Hard Drive in a RAID 10 Array and Sync the RAID 10 Array to the New Hard Drive

I had the hardest time rebuilding a RAID 10 array after replacing a hard drive.  I didn't fail the old hard drive before removing it from the array, and sometimes, this may not be an option.  What happened in my case is the data center replaced the hard drive that I had shipped to them directly from an eBay seller.  I was hoping that the RAID array would rebuild itself onto the new drive (as I have seen happen before in some circumstances).  However, that may not happen if the replacement drive still has its old RAID array or partition information present, and then, it might be difficult to actually get the RAID array to sync to the new drive. 

In my case, I run LVM (Logical Volume Manager) for my partitions.  This complicates the RAID setup, and I found that mdadm commands didn't work as expected.  If this situation occurs, it is best to boot Rocky Linux or CentOS in recovery mode using a Rocky Linux ISO or CentOS ISO.  Once the recovery system loads, drop to a shell without mounting any file systems.  Next, you will need to deactivate your LVM volume group:

vgdisplay
vgchange -a n my_volume_group # deactivate

Next, examine your md RAID array by running the following command:

cat /proc/mdstat

After running that command, I identified my RAID devices as md126 and md127.  /dev/md127 is considered the parent (container) device, even though /dev/md126 is the array where the data actually lives.

I can get more information about the RAID array by running the below commands:

mdadm --detail /dev/md126
mdadm --detail /dev/md127

Let's remove any failed or detached (no longer present) drives using these commands:

mdadm /dev/md126 --remove failed
mdadm /dev/md126 --remove detached
mdadm /dev/md127 --remove failed
mdadm /dev/md127 --remove detached

Next, we need to identify the replacement hard drive that we want to add to the array:

lsblk

From running the above command, I noticed that the new drive was /dev/sde, so I needed to wipe its old RAID configuration (if there is any) and then add it to the RAID array.

wipefs -a /dev/sde
mdadm --add /dev/md127 /dev/sde

Check to see if the syncing process has started:

cat /proc/mdstat

You may or may not need to run the below command to get the RAID device to start syncing to the new drive:

mdadm --grow /dev/md126 --raid-devices=4
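
To keep an eye on the rebuild as it progresses, you can watch /proc/mdstat refresh every few seconds:

watch -n 5 cat /proc/mdstat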

Helpful Links:

https://delightlylinux.wordpress.com/2020/12/22/how-to-remove-a-drive-from-a-raid-array/
https://serverfault.com/questions/554553/how-to-delete-removed-devices-from-a-mdadm-raid1
https://unix.stackexchange.com/questions/53129/dev-md127-refuses-to-stop-no-open-files
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/vg_activate
https://serverfault.com/questions/676638/mdadm-drive-replacement-shows-up-as-spare-and-refuses-to-sync

CentOS LVM and Software RAID Partitioning Instructions

Sunday, May 30th, 2021

Installing and Configuring CentOS to Host KVM Virtual Machines

GUI

When configuring a fresh install of CentOS for a KVM host machine (the main server that hosts all of the virtual machines), I like to run a GUI to make managing some of the virtual machines easier.  Thus, during install, choose the CentOS with Minimal GUI option.

RAID 10 LVM Partitions

When configuring the hard drive partitions, set them up to use LVM on top of software RAID 10:

Create a volume group called "vms" (without the quotes) that is set up as RAID 10 (set the volume group space to be as large as possible).

Set the "/" partition to 100GB XFS LVM (RAID10).

Set the "swap" partition to 32GB.

Only set up those two partitions.  The remaining space in the RAID 10 volume group "vms" will be used for KVM virtual machine disks (and that space does NOT need to be assigned to any mount points).
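
After the install completes, you can sanity-check the layout from a terminal; the volume group name below assumes the "vms" name suggested above:

cat /proc/mdstat
vgs vms
lvs vms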

That's all.

Increasing KVM Guest Hard Disk (Hard Drive) Space

Sunday, May 30th, 2021

Increasing KVM Guest Hard Disk (Hard Drive) Space

Increasing the hard drive space in a KVM guest can be rather tricky.  The first step is to shut down (completely turn off) the guest machine by running the below command from the guest system:

sudo shutdown -h now

Once the guest machine has been turned off (verify it is off by using sudo virt-manager on the host machine to confirm it's no longer running), resize the guest's LVM volume from the host machine by running the following command (adjusting the size as necessary):

sudo lvextend -L+78G /dev/vg_vps/utils

If you need help identifying the name of the disk your guest has been assigned, run this command from the host:

sudo virsh domblklist {VIRSH_NAME_OF_VIRTUAL_MACHINE}

For my example, I would use this command:

sudo virsh domblklist utils

From the host machine, download the GParted live ISO image for your system's architecture (x86 or x64).  Start virt-manager:

sudo virt-manager

Assign a CD drive to the virtual machine you're expanding the hard drive space for, and assign / mount the GParted ISO to it.  Change the boot order so that the KVM guest boots from the CD first.  Save your settings and start the KVM guest virtual machine.  Boot into GParted Live.  GParted will run automatically.  Use GParted to expand the partitions so that they make use of the added storage based on your own preferences.  Apply the resize operation.  Exit GParted and shutdown the virtual machine so that it's off again. Remove the CD drive from the boot options from virt-manager, and then start the KVM guest again. 

If Guest Doesn't Use LVM Partitioning

If your KVM guest virtual machine hasn't been configured to use LVM, the added hard drive space should already be available to your system.  Verify it has been expanded by running the df -h command.  You're done!

If Guest Uses LVM

Let the OS boot.  From within the guest, the file system itself still needs to be resized.  First, run the following command to see the current space allocated to your system's partitions:

df -h

You'll see a bunch of output similar to:

Filesystem                  Size  Used Avail Use% Mounted on
udev                        2.9G     0  2.9G   0% /dev
tmpfs                       597M  8.3M  589M   2% /run
/dev/mapper/utils--vg-root  127G   24G   98G  20% /
tmpfs                       3.0G     0  3.0G   0% /dev/shm
tmpfs                       5.0M     0  5.0M   0% /run/lock
tmpfs                       3.0G     0  3.0G   0% /sys/fs/cgroup
/dev/vda1                   720M   60M  624M   9% /boot
tmpfs                       597M     0  597M   0% /run/user/1000

You'll notice that the added hard drive space doesn't show up on any of the partitions.  However, it is available to be assigned to these partitions.  To assign additional space, you will need to resize it using these commands (run from the guest virtual machine… the machine you're resizing):

lvextend /dev/mapper/utils--vg-root -L +78G
resize2fs /dev/mapper/utils--vg-root
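
Note that resize2fs only works for ext2/ext3/ext4 file systems.  If your guest's root file system is XFS (the CentOS default), grow it with xfs_growfs instead, pointing it at the mount point:

xfs_growfs /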

Substitute the LVM volume name with the one shown for your system in the df -h output.

Resources

https://tldp.org/HOWTO/LVM-HOWTO/extendlv.html

https://sandilands.info/sgordon/increasing-kvm-virtual-machine-disk-using-lvm-ext4

Adding SAS RAID Drivers to CentOS 8 and Red Hat Linux During Installation

Friday, April 30th, 2021

Adding SAS RAID Drivers to CentOS 8 and Red Hat Linux During Installation

CentOS 8 and Red Hat Enterprise Linux 8 removed a lot of built-in RAID controller and SAS drivers.  As such, you'll need to identify your SAS RAID controller card model number, and then, during the installation of CentOS 8 or Red Hat, follow these instructions (modifying them for your hardware):

https://gainanov.pro/eng-blog/linux/rhel8-install-to-dell-raid/

If for some reason the link above is no longer available, I saved and archived a copy which can be read here.

Add ELRepo Permanently

As updates are released to CentOS 8 / Rocky Linux / Red Hat 8, the kernel will often be upgraded.  To make sure the SAS drivers are updated as well, you'll need to configure your system to pull updates from ELRepo automatically by using the following commands:

sudo rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
sudo yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm
sudo yum update -y
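
Depending on your controller, you may also need to install the matching driver package from ELRepo.  As a hypothetical example for a card that uses the megaraid_sas driver (check elrepo.org for the package name that matches your hardware):

sudo yum install -y kmod-megaraid_sas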

In case the above instructions no longer work, this guide should help.

Disable NetworkManager Wait Online Service

Prevent startup from being held up by network connection checks by running the below command:

sudo systemctl mask NetworkManager-wait-online.service

Linux Multiple Network Interfaces (NICs) – One Interface with Static Public IP and One Interface with Private DHCP LAN IP Address – Routes and Routing

Friday, July 24th, 2020

Linux KVM:  Using Multiple NICs and Routing Traffic Properly Between Them

When setting up a KVM guest to use multiple network interface controllers (NICs), additional IP routes may be needed in order for the additional interfaces to work properly.  For example, if you configure one NIC with a public static IP address and another NIC with an internal private DHCP LAN IP address, you must create a route so that any traffic that comes in through the DHCP LAN IP address is answered via the interface on which the request arrived.  Otherwise, forwarded NAT traffic from the main KVM host to the DHCP internal LAN IP will reach its destination, but no response will be sent back (because the reply will be sent via the interface with the configured static IP address, which may NOT be the interface the sender's request arrived on).

The Solution:

https://unix.stackexchange.com/questions/4420/reply-on-same-interface-as-incoming/23345#answer-23345

From the above link, the solution for me was to do the following in the KVM guest virtual machine:

Only needs to be done once:

sudo -i
echo 200 isp1 >> /etc/iproute2/rt_tables

Setting up the route (adjust variables as necessary):

sudo -i
ip rule add from <interface_IP> table isp1 priority 900
ip rule add from <interface_IP> dev <interface> table isp1
ip route add default via <gateway_IP> dev <interface> table isp1

The command I used for my specific setup:

sudo -i
ip rule add from 192.168.122.10 table isp1 priority 900 
ip rule add from 192.168.122.10 dev ens9 table isp1 
ip route add default via 192.168.122.1 dev ens9 table isp1
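
You can confirm that the rules and the route were created by listing the policy rules and the contents of the new table:

ip rule show
ip route show table isp1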

Making it permanent (applied on system startup):

sudo -i
nano /etc/network/interfaces

I added the below post-up rules (adjust variables as necessary):

auto ens9
iface ens9 inet dhcp
        post-up ip rule add from <interface_IP> table isp1 priority 900
        post-up ip rule add from <interface_IP> dev <interface> table isp1
        post-up ip route add default via <gateway_IP> dev <interface> table isp1

The rules and route are then created whenever the DHCP interface is brought up.

CentOS – Using NAT with KVM Guests

Wednesday, July 17th, 2019

CentOS – Using NAT with KVM Guests

Please note that all commands in this guide must be run on the main HOST machine (the physical machine).  They should not be run on KVM guests (virtual machines).

If your server has a limited number of IPv4 addresses, it might be best to set up and run some virtual machines that are configured to use the default NAT network interface that KVM provides.  This will allow you to run multiple virtual machines that share the same IP address.  Think of it as setting up a home network with multiple devices with certain ports forwarded to specific devices for incoming connections.

First, create the virtual machines that are going to use NAT using virt-manager as you normally would.  In the virtual machine configuration wizard, assign the default NAT network interface named "virbr0".  After the virtual machines have been created, shut down the virtual machines.  Now, we'll assign these virtual machines static LAN IP addresses so that we can port forward certain ports and always have them reach the proper virtual machines. 

The first thing we need to do is get the MAC address of each virtual machine.  Write down the name of the virtual machine and its MAC address, as we'll need this information later on when we edit the NAT interface and assign static LAN IP addresses to our virtual machines.  Run this command to retrieve the MAC address for a specific virtual machine.

virsh dumpxml VM_NAME | grep -i '<mac'

Using the MAC address information from the VMs we want to use NAT with, edit the default NAT interface by running the below command. 

virsh net-edit default

If for some reason the NAT interface is not named default, you can find it by running the below command:

virsh net-list

After the <range /> entry, assign the static LAN IP addresses similar to the following:

<dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
      <host mac='52:54:00:ff:4a:2a' name='vm1' ip='192.168.122.13'/>
      <host mac='52:54:00:bb:35:67' name='vm2' ip='192.168.122.14'/>
      <host mac='52:54:00:aa:d9:f2' name='vm3' ip='192.168.122.15'/>
</dhcp>

Save the file with your desired values and quit the editor.  Restart the NAT interface by running the below commands:

virsh net-destroy default
virsh net-start default
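
Once the virtual machines have been started again and have requested new leases, you can verify that each one received its reserved address:

virsh net-dhcp-leases default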

Now, you'll need to set up your iptables port forwarding rules.  Adjust the below rules as necessary (changing the port numbers to the ones you want to use) and then save them so that they persist:

iptables -I FORWARD -o virbr0 -d 192.168.122.13 -j ACCEPT
iptables -t nat -I PREROUTING -p tcp --dport 39989 -j DNAT --to 192.168.122.13:39989
iptables -I FORWARD -o virbr0 -d 192.168.122.14 -j ACCEPT
iptables -t nat -I PREROUTING -p tcp --dport 39990 -j DNAT --to 192.168.122.14:39990
iptables -I FORWARD -o virbr0 -d 192.168.122.15 -j ACCEPT
iptables -t nat -I PREROUTING -p tcp --dport 39991 -j DNAT --to 192.168.122.15:39991
iptables -t nat -A POSTROUTING -s 192.168.122.0/24 -j MASQUERADE
iptables -A FORWARD -o virbr0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i virbr0 -o br0 -j ACCEPT
iptables -A FORWARD -i virbr0 -o lo -j ACCEPT
iptables-save
service iptables save
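
To double-check that the DNAT rules are in place, you can list the nat table's PREROUTING chain:

iptables -t nat -L PREROUTING -n -v --line-numbers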

Congrats, your virtual machines are now using NAT, have been assigned static LAN IP addresses, and iptables rules on the host server have been configured to port forward specific ports to each NAT VM.


Persistently Saving NAT Port Forward Rules

The only solution I found that would persistently save my NAT forwarding rules is to create a libvirt hook bash script, as mentioned here.

service iptables stop
iptables -F
service iptables save
service iptables start
mkdir -p  /etc/libvirt/hooks
nano  /etc/libvirt/hooks/qemu

The contents of the "/etc/libvirt/hooks/qemu" file should look similar to the following:

#!/bin/bash
# IMPORTANT: Change the "VM NAME" string to match your actual VM Name.
# In order to create rules to other VMs, just duplicate the below block and configure
# it accordingly.
if [ "${1}" = "vm1" ]; then   # Update the following variables to fit your setup
   GUEST_IP=192.168.122.13
   GUEST_PORT=39989
   HOST_PORT=39989   
   if [ "${2}" = "stopped" ] || [ "${2}" = "reconnect" ]; then
    /sbin/iptables -D FORWARD -o virbr0 -d  $GUEST_IP -j ACCEPT
    /sbin/iptables -t nat -D PREROUTING -p tcp --dport $HOST_PORT -j DNAT --to $GUEST_IP:$GUEST_PORT
   fi
   if [ "${2}" = "start" ] || [ "${2}" = "reconnect" ]; then
    /sbin/iptables -I FORWARD -o virbr0 -d  $GUEST_IP -j ACCEPT
    /sbin/iptables -t nat -I PREROUTING -p tcp --dport $HOST_PORT -j DNAT --to $GUEST_IP:$GUEST_PORT
   fi
fi
if [ "${1}" = "vm2" ]; then   # Update the following variables to fit your setup
   GUEST_IP=192.168.122.14
   GUEST_PORT=39990
   HOST_PORT=39990   
   if [ "${2}" = "stopped" ] || [ "${2}" = "reconnect" ]; then
        /sbin/iptables -D FORWARD -o virbr0 -d  $GUEST_IP -j ACCEPT
        /sbin/iptables -t nat -D PREROUTING -p tcp --dport $HOST_PORT -j DNAT --to $GUEST_IP:$GUEST_PORT
   fi
   if [ "${2}" = "start" ] || [ "${2}" = "reconnect" ]; then
        /sbin/iptables -I FORWARD -o virbr0 -d  $GUEST_IP -j ACCEPT
        /sbin/iptables -t nat -I PREROUTING -p tcp --dport $HOST_PORT -j DNAT --to $GUEST_IP:$GUEST_PORT
   fi
fi
if [ "${1}" = "vm3" ]; then   # Update the following variables to fit your setup
   GUEST_IP=192.168.122.15
   GUEST_PORT=39991
   HOST_PORT=39991
   if [ "${2}" = "stopped" ] || [ "${2}" = "reconnect" ]; then
        /sbin/iptables -D FORWARD -o virbr0 -d  $GUEST_IP -j ACCEPT
        /sbin/iptables -t nat -D PREROUTING -p tcp --dport $HOST_PORT -j DNAT --to $GUEST_IP:$GUEST_PORT
   fi
   if [ "${2}" = "start" ] || [ "${2}" = "reconnect" ]; then
        /sbin/iptables -I FORWARD -o virbr0 -d  $GUEST_IP -j ACCEPT
        /sbin/iptables -t nat -I PREROUTING -p tcp --dport $HOST_PORT -j DNAT --to $GUEST_IP:$GUEST_PORT
   fi
fi

Save and exit.  Make the script executable:

chmod +x /etc/libvirt/hooks/qemu

Here is an example of forwarding multiple ports with one entry:

if [ "${1}" = "vm_name" ]; then
   # Update the following variables to fit your setup
   GUEST_IP=192.168.122.3
   GUEST_PORTS=(15231 15232 15233)   
   if [ "${2}" = "stopped" ] || [ "${2}" = "reconnect" ]; then
        for i in ${GUEST_PORTS[@]}; do
            /sbin/iptables -D FORWARD -o virbr0 -d  $GUEST_IP -j ACCEPT
            #/sbin/iptables -t nat -D PREROUTING -p tcp --dport ${i} -j DNAT --to $GUEST_IP:${i}
            /sbin/iptables -t nat -D PREROUTING -p tcp --dport ${i} -j DNAT --to $GUEST_IP:${i}
            #/sbin/iptables -t nat -D PREROUTING -p udp --dport ${i} -j DNAT --to $GUEST_IP:${i}
            #/sbin/iptables -t nat -D PREROUTING -p all -j DNAT --to $GUEST_IP:${i}
        done
   fi
   if [ "${2}" = "start" ] || [ "${2}" = "reconnect" ]; then
        for i in ${GUEST_PORTS[@]}; do
            /sbin/iptables -I FORWARD -o virbr0 -d  $GUEST_IP -j ACCEPT
            #/sbin/iptables -t nat -I PREROUTING -p tcp --dport ${i} -j DNAT --to $GUEST_IP:${i}
            /sbin/iptables -t nat -I PREROUTING -p tcp --dport ${i} -j DNAT --to $GUEST_IP:${i}
            #/sbin/iptables -t nat -I PREROUTING -p udp --dport ${i} -j DNAT --to $GUEST_IP:${i}
            #/sbin/iptables -t nat -I PREROUTING -p all -j DNAT --to $GUEST_IP:${i}
        done
   fi
fi

If you're attempting to forward incoming traffic only (traffic originating from outside the host), be sure to exclude the main virbr0 interface so that requests originating from the DHCP LAN aren't forwarded back into it.  This is helpful for port forwarding DNS (port 53).  Incoming requests from outside of the network will be forwarded to the specified server, but outgoing DNS requests made from that server will NOT be forwarded back to itself.  Basically, you want to forward all traffic on port 53 to that specific server except for traffic that originates from that server.  So, be sure to add a line like this if that is the case:

/sbin/iptables -t nat -I PREROUTING \! -i virbr0 -p udp --dport 53 -j DNAT --to $GUEST_IP:$GUEST_PORT

More information can be found here:  https://serverfault.com/questions/1042383/kvm-port-forwarding-port-53-to-guest-via-nat-temporary-failure-in-name-resolut/1042394#1042394

Reboot the host server.


Old Instructions for Persistent Saving (Non-Working)

If your iptables forwarding rules are not persisted after the host machine is rebooted or shut down, run the following commands:

sudo -i
yum install -y iptables-services
systemctl stop firewalld
systemctl disable firewalld
systemctl enable iptables
nano /etc/sysconfig/iptables-config 

Change the below values to "yes":

IPTABLES_SAVE_ON_RESTART="yes"
IPTABLES_SAVE_ON_STOP="yes"

Save and exit.  Reboot the server.

If you're still having issues, try this (will clear your existing iptables rules):

iptables-save > iptables_bk
service iptables stop
iptables -F
<run iptables NAT rules here>
<run any other iptables rules you want>
service iptables save
service iptables start

More Detailed Guide

Change the Default Editor to nano in Linux

Saturday, April 27th, 2019

Use nano as the Default Editor

If you hate vi like I do, you can configure Linux to always default to using the nano editor.

Simply add the following to the bottom of the /etc/bashrc file:

export EDITOR="nano"
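
Some programs consult the VISUAL variable before EDITOR, so you may optionally set that as well (this is an extra step beyond the original tip):

export VISUAL="nano"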

Save the file and open a new shell (or run source /etc/bashrc) so the change takes effect.  nano is now the default editor.  When you use

sudo crontab -e

the nano editor will now be used by default.

Configuring Let’s Encrypt Certbot on CentOS 7 with lighttpd

Saturday, April 27th, 2019

Configuring Let's Encrypt Certbot on CentOS 7 with lighttpd

Installing Certbot

First, install certbot by using the below commands:

sudo yum -y install epel-release
sudo yum install certbot

certbot is a Python-based program that allows you to request SSL certificates for your domains.

Request a Certificate

Use the below command to request a certificate (adjust paths and replace the test.com domain as necessary):

sudo certbot certonly --webroot -w /var/www/vhosts/test/httpdocs -d test.com

A certificate has now been stored in /etc/letsencrypt/live.  Create a combined certificate format by using the below command (replacing test.com with your real domain):

/bin/cat /etc/letsencrypt/live/test.com/cert.pem /etc/letsencrypt/live/test.com/privkey.pem > /etc/letsencrypt/live/test.com/custom.pem && /bin/chmod 777 /etc/letsencrypt/live/test.com/custom.pem && /sbin/service lighttpd restart

Certificate Renewal Cronjobs

You may want to create a cronjob to renew the certificate and another cronjob to regenerate the combined-format certificate, since the underlying certificate files change when the certificate is renewed:

sudo crontab -e

Insert the below cronjobs:

0 1 * * 1 /usr/bin/certbot renew --quiet
5 1 * * 1 /bin/cat /etc/letsencrypt/live/test.com/cert.pem /etc/letsencrypt/live/test.com/privkey.pem > /etc/letsencrypt/live/test.com/custom.pem && /bin/chmod 777 /etc/letsencrypt/live/test.com/custom.pem && /sbin/service lighttpd restart

Save your crontab configuration. 
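
You can test that the renewal step works without actually modifying your certificates by doing a dry run:

sudo certbot renew --dry-run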

Setting Up Lighttpd to Use SSL Certificate

Edit your default-enabled lighttpd configuration file in /etc/lighttpd/vhosts.d to look similar to the following (replacing test.com with your real domain and adjusting the various file paths):

$HTTP["host"] == "test.com" {
  var.server_name = "test.com"
  server.name = server_name
  server.document-root = vhosts_dir + "/test/httpdocs"
  #accesslog.filename          = vhosts_dir + "/test/log" + "/access.log"
}
$SERVER["socket"] == ":80" {
  server.document-root = vhosts_dir + "/test/httpdocs"
}
$SERVER["socket"] == ":443" {
    ssl.engine           = "enable"
    ssl.pemfile          = "/etc/letsencrypt/live/test.com/custom.pem"
    server.document-root = vhosts_dir + "/test/httpdocs"
    ssl.ca-file = "/etc/letsencrypt/live/test.com/chain.pem" # Root CA
    server.name = "test.com" # Domain Name OR Virtual Host Name
}

Here's how you can set a different document root for specific https (port 443) virtual hosts:

$SERVER["socket"] == ":443" {
    ssl.engine           = "enable"
    ssl.pemfile          = "/etc/letsencrypt/live/test.com/custom.pem"
    server.document-root = vhosts_dir + "/test/httpdocs/"
    ssl.ca-file = "/etc/letsencrypt/live/test.com/chain.pem" # Root CA
    server.name = "test.com" # Domain Name OR Virtual Host Name
    
    $HTTP["host"] =~ "(^|www\.)somethingelse.test.com" {
        server.document-root = vhosts_dir + "/test/httpdocs/subdir"
    }
}

Save and restart the lighttpd service.

sudo service lighttpd restart
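
If lighttpd fails to come back up, you can check the configuration for syntax errors (assuming the default configuration path):

sudo lighttpd -t -f /etc/lighttpd/lighttpd.conf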

Congrats, SSL is now available on your domain, and your Let's Encrypt certificate has been configured and will be renewed automatically by your cronjob.