Archive for the ‘Servers’ Category

Testing Simultaneous HTTP Requests using cURL

Tuesday, February 27th, 2024

Testing Simultaneous HTTP Requests in Parallel using cURL

If you're developing a web application and are worried that similar HTTP requests could come in from multiple users (or clients) at exactly the same time, you can see how your application behaves under this rare (almost impossible) circumstance by using cURL (on modern versions of Linux):

curl --parallel --parallel-immediate --parallel-max 3 --header "Content-Type: application/json" --request POST --data '{STRINGIFIED_JSON_PAYLOAD}' --config urls.txt

The urls.txt file should contain the URL of the API endpoint you're testing, repeated the same number of times as the --parallel-max value.  So in our case, it would contain:

url = "https://pathtoapiendpoint"
url = "https://pathtoapiendpoint"
url = "https://pathtoapiendpoint"

Check how your application behaves and make appropriate changes if you're worried about this happening.  There are various approaches you can use to keep the data consistent under concurrent requests, such as adding the previously read value of the database record to your WHERE clause when updating, so the update only succeeds if the data has not already been changed while the request was being processed.
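
For example, here's a minimal sketch of that optimistic check using the mysql command-line client, with hypothetical table and column names (orders, id, status) that are not from any real application:

# Only apply the update if the record still holds the value this request originally read.
# An affected-row count of 0 means another request changed the record first, so retry or reject.
mysql -u app_user -p app_db -e "UPDATE orders SET status = 'processed' WHERE id = 42 AND status = 'pending';"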

Or, you can use persistent virtual columns like this:

https://stackoverflow.com/questions/54338201/mysql-prevent-insertion-of-record-if-id-and-timestamp-unique-combo-constraint 

RAID Synchronization CRON Job Affecting Performance

Thursday, October 5th, 2023

RAID Synchronization CRON Job Affecting Performance

For some FakeRaid configurations, CentOS 7 and newer variants may run a RAID synchronization job configured in the /etc/cron.d directory in a file named raid-check.

This job is responsible for making sure the RAID array is in sync across all drives.  It runs by default every week on Sunday at 1 AM.

# Run system wide raid-check once a week on Sunday at 1am by default
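
The default entry in /etc/cron.d/raid-check typically looks something like this (it may differ slightly between distributions):

0 1 * * Sun root /usr/sbin/raid-check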

However, this was not a convenient time for my users, who were usually gaming then, so rather than hurt server performance, I changed the cron job to:

0 5 1 * * root /usr/bin/test $(/usr/bin/date +\%u) -ne 6 && /usr/sbin/raid-check

Thus, the sync job now runs once a month, on the 1st at 5 AM, and it is skipped if the 1st happens to fall on a Saturday.  This applies to several of my C1100 servers.

Manage Dell C1100 CS24-TY Servers Using IPMI Tool

Thursday, December 22nd, 2022

Manage Dell C1100 CS24-TY Servers Using IPMI Tool

Dell C1100 servers can be managed using ipmitool, which talks to the low-level BMC interface.  Dell has also released a BMC-specific utility that complements the generic ipmitool and makes it easier to configure and manage settings that are difficult to change using ipmitool on its own.

Here's how to use both the ipmitool and bmc utilities.  First, log in as root:

sudo -i

Now, install the ipmitool:

yum install OpenIPMI ipmitool -y

Start the ipmi service:

service ipmi start

Download the bmc utility:

mkdir -p ~/Downloads/bmc
cd ~/Downloads/bmc
wget -N "http://dinofly.com/files/dell_c1100/bmc-2014-10-15.zip"

If your system is running Perl 5.26 or newer, download this version instead:

# For PERL 5.26+
wget -O "bmc-2014-10-15.zip" -N "http://dinofly.com/files/dell_c1100/bmc-2014-10-15_perl_5.26.zip"

Unzip the bmc tool and make it executable:

unzip bmc-2014-10-15.zip
chmod +x bmc

Here's how to change the BMC interface IP settings:

ipmitool lan set 1 ipsrc static
ipmitool lan set 1 ipaddr x.x.x.x
ipmitool lan set 1 netmask x.x.x.x
ipmitool lan set 1 defgw ipaddr x.x.x.x
./bmc nic_mode set [dedicated|shared]
./bmc allinfo
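
To confirm the settings took effect, you can print the current LAN configuration back out (a standard ipmitool command, not part of the Dell bmc utility):

ipmitool lan print 1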

If the BMC web interface is not responding, try this:

ipmitool bmc reset cold
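
A few other generic ipmitool commands that can be handy for day-to-day checks on these servers (again, standard ipmitool subcommands rather than bmc utility features):

ipmitool sensor list
ipmitool sel list
ipmitool chassis status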

Support Older TLS Versions in Newer Ubuntu / Debian OS Versions

Monday, December 5th, 2022

Support Older TLS Versions in Newer Ubuntu / Debian OS Versions

Edit openssl.conf file:

sudo nano /etc/ssl/openssl.cnf

Add this line at the top:

openssl_conf = openssl_init

And add these lines at the end:

[openssl_init]
ssl_conf = ssl_sect

[ssl_sect]
system_default = system_default_sect

[system_default_sect]
CipherString = DEFAULT@SECLEVEL=1

https://askubuntu.com/questions/1233186/ubuntu-20-04-how-to-set-lower-ssl-security-level#answer-1296578
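
As a quick check, you can try forcing an older protocol version against the legacy host you need to reach (example.com below is just a placeholder) and see whether the handshake now completes:

openssl s_client -connect example.com:443 -tls1_1 < /dev/null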

cURL and wget Issues on Ubuntu 16.04 – SSL: TLSV1_ALERT_PROTOCOL_VERSION

Monday, December 5th, 2022

cURL and wget Issues on Ubuntu 16.04

When using wget or curl to make HTTP requests from a no longer supported installation of Ubuntu 16.04 Xenial, if you get any of the following errors:

curl gnutls_handshake() failed: Error in protocol version
curl: (35) error:1407742E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version
[SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:727) 

The solution is to add Rob Savoury's PPAs (savoury1) to get updated curl and wget packages:

sudo add-apt-repository ppa:savoury1/build-tools
sudo add-apt-repository ppa:savoury1/backports
sudo add-apt-repository ppa:savoury1/python
sudo add-apt-repository ppa:savoury1/encryption
sudo add-apt-repository ppa:savoury1/curl34
sudo apt-get update && sudo apt-get upgrade
sudo apt-get install wget curl python2.7
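
After the upgrade, a quick sanity check (example.com is just a placeholder for any modern HTTPS site):

curl --version
wget --version
curl -I https://example.com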

Setup Remote Logging on an Ubuntu rsyslog Server for DD-WRT to Use

Wednesday, June 9th, 2021

Setup Remote Logging on an Ubuntu rsyslog Server for DD-WRT to Use

Enable remote logging on an Ubuntu server by configuring rsyslog to accept remote connections on port 514 (adjust as needed):

sudo nano /etc/rsyslog.conf

Uncomment the imudp and imtcp load module statements like so (adjusting as needed):

# provides UDP syslog reception
module(load="imudp")
input(type="imudp" port="514")

# provides TCP syslog reception
module(load="imtcp")
input(type="imtcp" port="514")

Create a logging template and apply it only to remote hosts whose names start with "c-" (the Comcast connection hostname prefix, which is followed by the device's IP address and can change):

# Comcast remote logging
$template remote-incoming-logs,"/var/log/remote_logs/%HOSTNAME%/%PROGRAMNAME%.log"
if $fromhost startswith "c-" then -?remote-incoming-logs

Save and quit.

Restart the rsyslog daemon:

sudo service rsyslog restart

Remote logs will be stored in /var/log/remote_logs

Configure logrotate to process and rotate these logs automatically (so you don't lose them and have a history on them):

sudo nano /etc/logrotate.d/ddwrt

Paste these contents into the file:

/var/log/remote_logs/*.log /var/log/remote_logs/*/*.log {
    daily
    missingok
    compress
    delaycompress
    su syslog adm
}

Save and quit.
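
You can optionally dry-run the new rule to make sure logrotate parses it correctly, without actually rotating anything:

sudo logrotate -d /etc/logrotate.d/ddwrt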

Everything has been configured, and remote logging should work from your DD-WRT router once you set the remote syslog address to your server's IP address and port and apply the changed settings.
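
As a quick test from any Linux machine on the LAN, you can send a message to the new syslog server with logger (192.168.1.10 is a placeholder for your server's IP address; use -T instead of -d if you configured TCP only):

logger -n 192.168.1.10 -P 514 -d "test message for remote rsyslog"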

CentOS LVM and Software RAID Partitioning Instructions

Sunday, May 30th, 2021

Installing and Configuring CentOS to Host KVM Virtual Machines

GUI

When configuring a fresh install of CentOS for a KVM host machine (the main server that hosts all of the virtual machines), I like to run a GUI to make managing some of the virtual machines easier.  Thus, during install, choose the options for CentOS with Minimal GUI.

RAID 10 LVM Partitions

When configuring the hard drive partitions, set them up to use LVM on software RAID 10:

Create a volume group called "vms" (without the quotes) that is set up as RAID 10, and make the volume group space as large as possible.

Set the "/" partition to 100GB XFS LVM (RAID10).

Set the "swap" partition to 32GB.

Only set up those two partitions.  The remaining space in the RAID 10 volume group "vms" will be used for KVM containers (and the remaining space does NOT need to be assigned to any mount points).
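
Once the install finishes, a quick way to confirm the layout is to check the software RAID and LVM status (standard mdadm/LVM tools, assuming the volume group name above):

cat /proc/mdstat
sudo vgs
sudo lvs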

That's all.

Increasing KVM Guest Hard Disk (Hard Drive) Space

Sunday, May 30th, 2021

Increasing KVM Guest Hard Disk (Hard Drive) Space

Increasing the hard drive space in a KVM guest can be rather tricky.  The first step is to shut down (completely turn off) the guest machine by running the below command from the guest system:

sudo shutdown -h now

Once the guest machine has been turned off (verify it is off by using sudo virt-manager on the host machine to see if it's no longer running), on the host machine, resize the LVM partition by running the following command (and adjust the size as necessary):

sudo lvextend -L+78G /dev/vg_vps/utils

If you need help identifying the name of the disk your guest has been assigned, run this command from the host:

sudo virsh domblklist {VIRSH_NAME_OF_VIRTUAL_MACHINE}

For my example, I would use this command:

sudo virsh domblklist utils

From the host machine, download the GParted live ISO image for your system's architecture (x86 or x64).  Start virt-manager:

sudo virt-manager

Assign a CD drive to the virtual machine you're expanding the hard drive space for, and mount the GParted ISO to it.  Change the boot order so that the KVM guest boots from the CD first.  Save your settings and start the KVM guest virtual machine.

Boot into GParted Live; GParted will run automatically.  Use GParted to expand the partitions so that they make use of the added storage based on your own preferences, then apply the resize operation.

Exit GParted and shut down the virtual machine so that it's off again.  Remove the CD drive from the boot options in virt-manager, and then start the KVM guest again.

If Guest Doesn't Use LVM Partitioning

If your KVM guest virtual machine hasn't been configured to use LVM, the added hard drive space should already be available to your system.  Verify it has been expanded by running the df -h command.  You're done!

If Guest Uses LVM

Let the OS boot.  From the guest, the logical volume and file system themselves still need to be resized.  Start by running the following command to see the current space allocated to your system's partitions:

df -h

You'll see a bunch of output similar to:

Filesystem                  Size  Used Avail Use% Mounted on
udev                        2.9G     0  2.9G   0% /dev
tmpfs                       597M  8.3M  589M   2% /run
/dev/mapper/utils--vg-root  127G   24G   98G  20% /
tmpfs                       3.0G     0  3.0G   0% /dev/shm
tmpfs                       5.0M     0  5.0M   0% /run/lock
tmpfs                       3.0G     0  3.0G   0% /sys/fs/cgroup
/dev/vda1                   720M   60M  624M   9% /boot
tmpfs                       597M     0  597M   0% /run/user/1000

You'll notice that the added hard drive space doesn't show up on any of the partitions.  However, it is available to be assigned to these partitions.  To assign additional space, you will need to resize it using these commands (run from the guest virtual machine… the machine you're resizing):

sudo lvextend -L +78G /dev/mapper/utils--vg-root
sudo resize2fs /dev/mapper/utils--vg-root

Obviously, you need to substitute the name of the LVM volume with the one shown for your system in the output of the df -h command.
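
Note that resize2fs only applies to ext2/3/4 file systems.  If your guest's root file system is XFS (the CentOS default), grow it with xfs_growfs instead, pointing it at the mount point rather than the device:

sudo xfs_growfs /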

Resources

https://tldp.org/HOWTO/LVM-HOWTO/extendlv.html

https://sandilands.info/sgordon/increasing-kvm-virtual-machine-disk-using-lvm-ext4

Adding SAS RAID Drivers to CentOS 8 and Red Hat Linux During Installation

Friday, April 30th, 2021

Adding SAS RAID Drivers to CentOS 8 and Red Hat Linux During Installation

CentOS 8 and Red Hat Linux 8 removed a lot of built-in RAID controller and SAS drivers.  As such, you'll need to identify your SAS RAID controller card's model number, and then, during the installation of CentOS 8 or Red Hat, follow these instructions (modifying them for your hardware):

https://gainanov.pro/eng-blog/linux/rhel8-install-to-dell-raid/

If for some reason the link above is no longer available, I saved and archived a copy which can be read here.

Add ELRepo Permanently

As updates are released for CentOS 8 / Rocky Linux / Red Hat 8, the kernel will often be upgraded.  To make sure the SAS drivers are updated as well, you'll need to configure your system to pull updates from ELRepo automatically by using the following commands:

sudo rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
sudo yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm
sudo yum update -y
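
As an example (adjust for your own hardware), if your controller relies on the mpt3sas driver, the corresponding ELRepo kmod package can be installed with:

sudo yum install kmod-mpt3sas -y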

In case the above instructions no longer work, this guide should help.

Disable NetworkManager Wait Online Service

Prevent the boot process from being held up by network connection checks by running the command below:

sudo systemctl mask NetworkManager-wait-online.service
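
Afterwards, the unit should report itself as masked:

systemctl status NetworkManager-wait-online.service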

The Dangers of Using tcp_tw_recycle in Linux – Strange Intermittent Timeout Issues

Wednesday, January 13th, 2021

Do Not Use tcp_tw_recycle

I had a very strange connectivity issue recently that I was only able to reproduce intermittently on my own LAN network when connecting to a few of my servers hosting websites that process and receive tons of simultaneous connections at any point in time.  Basically, my connection to a specific set of websites that I host would timeout from my home internet connection.  However, I was never able to reproduce this issue when connecting to the same sites from other networks belonging to my family and friends. 

From my home connection, I used TCPView and saw connections to my servers sitting in the SYN_SENT state, meaning SYN packets had supposedly been sent to establish a connection.  Unfortunately, the server never replied to some of these requests.  As such, my connection would time out at times and work perfectly fine at others.  I looked at DD-WRT's connection table, and it also claimed that the packets had been sent, but that they were in an UNREPLIED state when I experienced issues.  Thus, packets were supposedly being sent, but the server was not responding at times.  After spending nearly a week trying to tackle this issue and buying new cable internet equipment (an officially supported Comcast modem), I tracked down the issue, and it ended up being a TCP configuration setting on my servers rather than my home LAN equipment.

Modem or Router's Fault?

Originally, I thought my issue was caused by the DD-WRT open source firmware I was running on my wireless router.  If I restored the router's settings to DD-WRT's factory defaults, I could always connect to the websites I was having intermittent connection timeout issues with.  I suspected it might be my router, especially after trying an older router that didn't have any problems either.  I even upgraded the DD-WRT firmware to the latest version and rebuilt my complicated network configuration from scratch.  Unfortunately, the issue was still there.  Thus, despite mixed results with different routers, I started to wonder if the issue was on my server's end.

Finally Fixed

I started looking at sysctl TCP settings I could adjust on my router, and I ended up comparing some of these values to the ones used on my servers (that were hosting the problem websites).  Eventually, I came across configuration values I had changed myself several months ago which were supposed to help the server support multiple simultaneous connections.

After reading this StackOverflow thread (https://stackoverflow.com/questions/6426253/tcp-tw-reuse-vs-tcp-tw-recycle-which-to-use-or-both), I decided I would try disabling the tcp_tw_recycle setting.

/proc/sys/net/ipv4/tcp_tw_recycle was set to 1 (enabled) from tweaks I had found on the internet and applied.  After I disabled it, /proc/sys/net/ipv4/tcp_tw_recycle was set to 0 (disabled).  By default, Linux keeps tcp_tw_recycle disabled; again, this is something I had changed for tuning reasons.  After disabling this setting and rebooting the server, I no longer have any issues connecting to the servers in question.  No more connection timeouts, and everything works properly again.
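
If you need to check or change this yourself on an older kernel that still has the setting, a quick sketch:

# Check the current value
sysctl net.ipv4.tcp_tw_recycle
# Disable it immediately
sudo sysctl -w net.ipv4.tcp_tw_recycle=0
# Also remove or zero out any net.ipv4.tcp_tw_recycle line in /etc/sysctl.conf so the change survives a reboot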

I have no idea why I wasn't able to reproduce this issue on other networks.  I thought it was my network equipment (modem and router), but it ended up being the server.

Lessons Learned

Be careful when applying settings you find online.  Sometimes they may not work, or their implementation may be buggy.  In fact, net.ipv4.tcp_tw_recycle was removed from the Linux kernel entirely as of version 4.12.  I'm guessing this is because it doesn't work reliably, as I experienced.  Do NOT use net.ipv4.tcp_tw_recycle!  I kept tcp_tw_reuse enabled, so you can enable that setting without running into problems.  Just don't, for the love of anything, use tcp_tw_recycle!  It doesn't work, and it will cause you headaches trying to track down strange intermittent issues!