Archive for the ‘Ubuntu Linux’ Category

Set Up Remote Logging on an Ubuntu rsyslog Server for DD-WRT to Use

Wednesday, June 9th, 2021

Set Up Remote Logging on an Ubuntu rsyslog Server for DD-WRT to Use

Enable remote logging on an Ubuntu server by configuring rsyslog to accept remote connections on port 514 (adjust the port as needed):

sudo nano /etc/rsyslog.conf

Uncomment the imudp and imtcp module load and input statements like so (adjusting as needed):

# provides UDP syslog reception
module(load="imudp")
input(type="imudp" port="514")

# provides TCP syslog reception
module(load="imtcp")
input(type="imtcp" port="514")

Create a logging template and apply it only to remote hosts whose names start with "c-" (the Comcast connection hostname prefix; the rest of the hostname is the device's IP address, which can change):

# Comcast remote logging
$template remote-incoming-logs,"/var/log/remote_logs/%HOSTNAME%/%PROGRAMNAME%.log"
if $fromhost startswith "c-" then -?remote-incoming-logs

Save and quit.

Restart the rsyslog daemon:

sudo service rsyslog restart

Remote logs will be stored in per-host subdirectories under /var/log/remote_logs.
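
To confirm rsyslog is listening and receiving remote messages, a quick check (run the logger lines from another machine; 192.168.1.10 stands in for your server's IP, and since a test machine's hostname probably won't start with "c-", the test messages will land in the server's default /var/log/syslog rather than /var/log/remote_logs):

sudo ss -tulnp | grep 514
logger -n 192.168.1.10 -P 514 -d "rsyslog remote test over UDP"
logger -n 192.168.1.10 -P 514 -T "rsyslog remote test over TCP"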

Configure logrotate to rotate these logs automatically (so they don't grow unbounded and you keep a history of them):

sudo nano /etc/logrotate.d/ddwrt

Paste these contents into the file:

/var/log/remote_logs/*.log /var/log/remote_logs/*/*.log {
    daily
    missingok
    compress
    delaycompress
    su syslog adm
}

Save and quit.
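
You can dry-run the new rotation rules to confirm logrotate parses the file and matches the remote logs (the -d flag only prints what would happen without rotating anything):

sudo logrotate -d /etc/logrotate.d/ddwrt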

Everything has been configured. Remote logging should now work from your DD-WRT router once you point its remote syslog setting at your server's IP address:port combination and apply the changed settings.

Fixing Sound in Ubuntu

Wednesday, April 21st, 2021

Fixing Sound in Ubuntu

If your sound randomly quits working after installing updates via the apt system (sudo apt-get update && sudo apt-get upgrade) or via the Software & Updates graphical program, it's possible that some of the sound drivers were not installed alongside the latest kernel update.

To fix this, try running the below command:

sudo apt install linux-modules-extra-$(uname -r)

Reboot.  Your sound should hopefully work again!
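
If it doesn't, you can confirm whether the extra-modules package matching your running kernel actually got installed:

uname -r
dpkg -l | grep linux-modules-extra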

If the above command doesn't work (older versions of Ubuntu do not have this package), please see the generic information here:

https://itsfoss.com/how-to-fix-no-sound-through-hdmi-in-external-monitor-in-ubuntu/

Intel Original Compute Stick – Keep WiFi from Breaking – Block Kernel Updates and Chestersmill-Settings Updates – Get Latest Software from Ubuntu Advantage

Wednesday, April 21st, 2021

Intel Original Compute Stick – Keep WiFi from Breaking – Block Kernel Updates and Chestersmill-Settings Updates

If you have an original Intel Compute Stick (the STCK1A8LFC [1GB of RAM] or STCK1A32WFC [2GB of RAM] models from 2015), you'll need to prevent a few software packages from updating so that updates won't break your WiFi!  I've never been able to get the WiFi working on these Compute Sticks with any kernel newer than version 3.16, on any version of Ubuntu, and I've never been able to get it working on anything but Ubuntu 14.04, so you might be stuck running this older release.  Updates to the chestersmill-settings package can also break your WiFi.  To prevent both scenarios, simply prevent the packages below from being updated.

Preventing packages from being updated can be accomplished using hold statuses in the apt system as explained on AskUbuntu:

https://askubuntu.com/questions/18654/how-to-prevent-updating-of-a-specific-package

Basically, you'll need to run the below commands to prevent WiFi from breaking due to buggy kernel and chestersmill-settings package updates:

sudo apt-mark hold chestersmill-settings
sudo apt-mark hold linux-image-$(uname -r)
sudo apt-mark hold linux-image-generic
sudo apt-mark hold linux-generic

You can now safely update software packages without worrying about your WiFi being broken by kernel and chestersmill-settings package updates!
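
To review which packages are held, or to undo a hold later (linux-image-generic shown as an example), use apt-mark's showhold and unhold:

apt-mark showhold
sudo apt-mark unhold linux-image-generic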

Get Latest Software from Ubuntu Advantage for Ubuntu 14.04

Ubuntu 14.04 is also still supported via the Extended Security Maintenance (ESM) Ubuntu Advantage program (https://ubuntu.com/advantage).  

To get updated software and packages until April of 2022, you'll need to get an Ubuntu Advantage key.  Get your key using your Ubuntu One account on this page:  https://ubuntu.com/advantage

Now, install the Ubuntu Advantage client using the commands below:

sudo add-apt-repository ppa:ua-client/stable
sudo apt-get update
sudo apt-get install ubuntu-advantage-tools

Set your key using the command below (using your key rather than YOUR_KEY_HERE):

sudo ua attach YOUR_KEY_HERE
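
You can confirm the machine attached successfully and see which services (including ESM) are enabled:

sudo ua status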

Now update and install the upgraded packages available via Ubuntu Advantage:

sudo apt-get update && sudo apt-get upgrade

The Dangers of Using tcp_tw_recycle in Linux – Strange Intermittent Timeout Issues

Wednesday, January 13th, 2021

Do Not Use tcp_tw_recycle

I recently had a very strange connectivity issue that I was only able to reproduce intermittently on my own LAN when connecting to a few of my servers hosting websites that receive and process tons of simultaneous connections.  Basically, my connection to a specific set of websites that I host would time out from my home internet connection.  However, I was never able to reproduce this issue when connecting to the same sites from other networks belonging to my family and friends.

From my home connection, I used TCPView and saw that SYN_SENT packets were supposedly sent to my servers to establish a connection.  Unfortunately, the server never replied to some of these requests, so my connection would time out at times and work perfectly fine at others.  I looked at DD-WRT's connection table, and it also claimed the packets had been sent but were in an UNREPLIED state when I experienced issues.  Thus, packets were being sent, but the server was not always responding.  After spending nearly a week trying to tackle the issue and buying new cable internet equipment (an officially supported Comcast modem), I finally tracked it down: it was a TCP configuration setting on my servers rather than my home LAN equipment.

Modem or Router's Fault?

Originally, I thought my issue was caused by the DD-WRT open-source firmware I was running on my wireless router.  If I restored the router's settings to DD-WRT's factory defaults, I could always connect to the websites that were intermittently timing out.  An older router I tried didn't have any problems either, which made me suspect my router even more.  I even upgraded the DD-WRT firmware to the latest version and rebuilt my complicated network configuration from scratch.  Unfortunately, the issue was still there.  Thus, despite the mixed results with different routers, I started to wonder if the issue was on my server's end.

Finally Fixed

I started looking at sysctl TCP settings I could adjust on my router and ended up comparing some of those values to the ones used on my servers (the servers hosting the problem websites).  Eventually, I came across configuration values I had changed myself several months earlier, values that were supposed to help the servers support many simultaneous connections.

After reading this StackOverflow thread (https://stackoverflow.com/questions/6426253/tcp-tw-reuse-vs-tcp-tw-recycle-which-to-use-or-both), I decided I would try disabling the tcp_tw_recycle setting.

/proc/sys/net/ipv4/tcp_tw_recycle was set to 1 (enabled) by tweaks I had applied from the internet.  After I disabled it, /proc/sys/net/ipv4/tcp_tw_recycle was set to 0 (disabled), which is the Linux default; again, enabling it was something I had changed myself for tuning reasons.  After disabling this setting and rebooting the server, I no longer have any issues connecting to the servers in question.  No more connection timeouts, and everything works properly again.
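
Checking and disabling the setting looks like this (this only applies to kernels older than 4.12, where the knob still exists; also remove any net.ipv4.tcp_tw_recycle=1 line from /etc/sysctl.conf or /etc/sysctl.d/ so the fix survives reboots):

cat /proc/sys/net/ipv4/tcp_tw_recycle
sudo sysctl -w net.ipv4.tcp_tw_recycle=0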

I have no idea why I wasn't able to reproduce this issue on other networks.  I thought it was my network equipment (modem and router), but it ended up being the server.

Lessons Learned

Be careful when applying settings you find online.  Sometimes they don't work, or their implementation is buggy.  In fact, net.ipv4.tcp_tw_recycle was removed from Linux entirely as of kernel 4.12.  The setting drops packets from a peer whose TCP timestamps go backwards, which breaks clients behind NAT (several devices sharing one public IP, each with its own timestamp clock), and that would explain why my problem was intermittent and network-dependent.  Do NOT use net.ipv4.tcp_tw_recycle!  I kept tcp_tw_reuse enabled, so you can enable that setting without running into problems.  Just don't, for the love of anything, use tcp_tw_recycle!  It doesn't work, and it will cause you headaches trying to track down strange intermittent issues!


Linux Multiple Network Interfaces (NICs) – One Interface with Static Public IP and One Interface with Private DHCP LAN IP Address – Routes and Routing

Friday, July 24th, 2020

Linux KVM:  Using Multiple NICs and Routing Traffic Properly Between Them

When setting up a KVM guest to use multiple network interface controllers (NICs), additional IP routes may be needed for the additional interfaces to work properly.  For example, if you configure one NIC with a public static IP address and another NIC with an internal private DHCP LAN IP address, you must create a route so that any traffic arriving on the DHCP LAN IP responds via the interface from which the request originated.  Otherwise, forwarded NAT traffic from the main KVM host to the internal DHCP LAN IP will reach its destination, but no response will be sent back (the reply would be sent via the static IP interface, which may NOT be the destination the sender originally contacted).

The Solution:

https://unix.stackexchange.com/questions/4420/reply-on-same-interface-as-incoming/23345#answer-23345

From the above link, the solution for me was to do the following in the KVM guest virtual machine:

This only needs to be done once (it registers a custom routing table named isp1):

sudo -i
echo 200 isp1 >> /etc/iproute2/rt_tables

Setting up the route (adjust variables as necessary):

sudo -i
ip rule add from <interface_IP> table isp1 priority 900
ip rule add from <interface_IP> dev <interface> table isp1
ip route add default via <gateway_IP> dev <interface> table isp1

The command I used for my specific setup:

sudo -i
ip rule add from 192.168.122.10 table isp1 priority 900 
ip rule add from 192.168.122.10 dev ens9 table isp1 
ip route add default via 192.168.122.1 dev ens9 table isp1

Making it permanent (apply on system start up):

sudo -i
nano /etc/network/interfaces

I added the below post-up rules (adjust variables as necessary):

auto ens9
iface ens9 inet dhcp
        post-up ip rule add from <interface_IP> table isp1 priority 900
        post-up ip rule add from <interface_IP> dev <interface> table isp1
        post-up ip route add default via <gateway_IP> dev <interface> table isp1

The rules and route are then re-created whenever the DHCP interface is brought up.
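
You can verify the policy routing is in place with two read-only commands (table name as configured above):

ip rule show
ip route show table isp1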

Obtaining Let’s Encrypt HTTP Validation IP Addresses

Saturday, July 11th, 2020

Obtaining Let's Encrypt HTTP Validation Server IP Addresses

Use your web server logs.  The john package is installed only because it ships the unique utility, which deduplicates the extracted addresses into the ips file:

sudo apt-get install john
grep "Let's Encrypt" access_log.1 | awk '{print $1}' | unique ips
cat ips
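
If you'd rather not install john just for its unique tool, coreutils can do the same job:

grep "Let's Encrypt" access_log.1 | awk '{print $1}' | sort -u > ips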

Installing Chrome WebDriver (Linux Script)

Wednesday, August 28th, 2019

Installing Chrome WebDriver (Linux Script)

Find out which version of Chrome is installed on your system before running the below commands.  You can find your Chrome version by running the following command:

google-chrome --version

Adjust the version number (replace {VERSION_NUMBER}) in the below commands to match the version installed on your system!
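
If you'd rather resolve the matching build automatically, ChromeDriver's LATEST_RELEASE endpoint can map your installed major version to the right build number; a minimal sketch, assuming a Chrome version before 115 (newer releases moved to the Chrome for Testing download endpoints):

# Extract the major version (e.g. 90) from the installed Chrome
major=$(google-chrome --version | awk '{print $3}' | cut -d. -f1)
# Ask the ChromeDriver archive for the matching build; use the result as {VERSION_NUMBER}
wget -qO- "https://chromedriver.storage.googleapis.com/LATEST_RELEASE_${major}"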

sudo -i
cd ~/Downloads
rm -f chromedriver_linux64.zip
wget -N https://chromedriver.storage.googleapis.com/{VERSION_NUMBER}/chromedriver_linux64.zip
unzip chromedriver_linux64.zip
mv chromedriver /usr/bin/chromedriver
chown root:root /usr/bin/chromedriver
chmod +x /usr/bin/chromedriver
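
You can confirm the driver is installed and responding:

chromedriver --version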

Selenium and other libraries that rely on the Chrome WebDriver should now work properly.

Installing the Newest Version of Python 2.7.x on Older Versions of Ubuntu (like 14.04 and 16.04)

Thursday, May 9th, 2019

Installing the Newest Version of Python 2.7.x on Older Ubuntu Systems

If you need to upgrade to the newest version of Python 2.7.x, and you're running an older distribution (like Ubuntu 14.04), use the following commands to get and compile the latest version from source (works on Ubuntu 17.04 and older – tested on Ubuntu 14.04):

sudo apt-get install build-essential checkinstall
sudo apt-get install libreadline-gplv2-dev libncursesw5-dev libssl-dev libsqlite3-dev tk-dev libgdbm-dev libc6-dev libbz2-dev
version=2.7.18
cd ~/Downloads/
wget https://www.python.org/ftp/python/$version/Python-$version.tgz
tar -xvf Python-$version.tgz
cd Python-$version
./configure --with-ensurepip=install
make
sudo make install
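
After the install finishes, you can confirm the new interpreter and its bundled pip work (paths assume the default /usr/local prefix):

/usr/local/bin/python2.7 --version
/usr/local/bin/python2.7 -m pip --version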

Install requests and hashlib:

sudo rm /usr/lib/python2.7/dist-packages/chardet*.egg-info
sudo rm -r /usr/lib/python2.7/dist-packages/chardet
sudo rm /usr/lib/python2.7/lib-dynload/_hashlib.x86_64-linux-gnu.so
sudo rm /usr/lib/python2.7/lib-dynload/_hashlib.i386-linux-gnu.so
sudo pip install requests
sudo easy_install hashlib

You may need to create a symlink for chardet after installing it directly from pip:

ln -sf /usr/local/lib/python2.7/site-packages/chardet /usr/lib/python2.7/dist-packages/chardet

If you get the error "ImportError: cannot import name _remove_dead_weakref" when running a pam_python-based authentication script after installing the new version of Python, try this fix:

sudo cp /usr/local/lib/python2.7/weakref.py /usr/local/lib/python2.7/weakref_old.py
sudo cp /usr/lib/python2.7/weakref.py /usr/local/lib/python2.7/weakref.py

Getting Let's Encrypt Certbot to Work:

Now, you'll need to delete the EFF directory from the /opt directory to avoid old configuration issues left over from your older version of Python.  Once you clean up this directory, run certbot again so it can reconfigure itself:

sudo rm -r /opt/eff.org/
sudo certbot

Old Way

jonathonf is now a very greedy person and has made his repositories private, so this method no longer works as of 12/20/2019.

sudo add-apt-repository ppa:jonathonf/python-2.7
sudo apt-get update
sudo apt-get install python2.7

Then, you'll need to clean up a few leftover system packages manually before installing the newest version of python-pip.  If you don't do this, you'll run into problems installing some new packages using pip.

sudo rm /usr/lib/python2.7/dist-packages/chardet*.egg-info
sudo rm -r /usr/lib/python2.7/dist-packages/chardet
sudo rm /usr/lib/python2.7/lib-dynload/_hashlib.x86_64-linux-gnu.so
sudo rm /usr/lib/python2.7/lib-dynload/_hashlib.i386-linux-gnu.so

Now, you can download and install the newest version of python-pip:

curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
sudo python get-pip.py

Getting Let's Encrypt Certbot to Work:

First, you'll need to install a few packages that Certbot (the Let's Encrypt client) uses:

sudo pip install requests
sudo pip install hmac

Now, you'll need to delete the EFF directory from the /opt directory to avoid old configuration issues left over from your older version of Python.  Once you clean up this directory, run certbot again so it can reconfigure itself:

sudo rm -r /opt/eff.org/
sudo certbot

You're done.

Full list of commands (for quickly doing all of the above):

sudo -i
add-apt-repository ppa:jonathonf/python-2.7
apt-get update
apt-get install python2.7
rm /usr/lib/python2.7/dist-packages/chardet*.egg-info
rm -r /usr/lib/python2.7/dist-packages/chardet
rm /usr/lib/python2.7/lib-dynload/_hashlib.x86_64-linux-gnu.so
rm /usr/lib/python2.7/lib-dynload/_hashlib.i386-linux-gnu.so
mkdir -p /root/Downloads
cd /root/Downloads
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python get-pip.py
pip install requests
pip install hmac
rm -r /opt/eff.org/
certbot

Change the Default Editor to nano in Linux

Saturday, April 27th, 2019

Use nano as the Default Editor

If you hate vi like I do, you can configure Linux to always default to using the nano editor.

Simply add the following to the bottom of the /etc/bash.bashrc file (the Ubuntu/Debian equivalent of /etc/bashrc on some other distributions):

export EDITOR="nano"

Save the file and start a new shell so the export takes effect.  nano is now the default editor.  For example, when you use

sudo crontab -e

the nano editor will now be used by default.
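
On Ubuntu and Debian, you can also set the system-wide default through the alternatives system, and select-editor handles programs like crontab that use sensible-editor:

sudo update-alternatives --config editor
select-editor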

Copying LVM Containers from One Remote Server to Another

Saturday, April 27th, 2019

Transferring LVM Containers

Before you transfer an LVM-backed KVM virtual machine to another server, create a KVM virtual machine on the target server with a disk (logical volume) the same size as or larger than the container being transferred.

You can see a full list of logical volumes by using the below command:

sudo lvdisplay
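
If you need to create the destination logical volume by hand instead, a minimal sketch (the volume group name vms and the 50G size are assumptions; match or exceed the source size shown by lvdisplay):

sudo lvcreate -L 50G -n phpdev vms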

Copying an LVM Container from the Local Machine to a Remote Server

sudo -i
dd if=/dev/vms/phpdev bs=4096 | pv | ssh root@IPADDRESS_HERE -p SSH_PORT 'dd of=/dev/pool/phpdev bs=4096'

Adjust the above pool paths as necessary since this may vary from server to server. 

Copying an LVM Container from a Remote Machine to the Local Machine

sudo -i
ssh root@IPADDRESS_HERE -p SSH_PORT "dd if=/dev/vms/phpdev bs=4096" | dd of="/dev/vms/phpdev" bs="4096"

Adjust the above pool paths as necessary since this may vary from server to server. 

With SSH Passphrase Key

If you're using an SSH key that is protected with a passphrase, use the below commands to load the key into ssh-agent, provide the passphrase once, and copy containers without being prompted again when the transfer begins:

sudo -i
eval $(ssh-agent)
ssh-add /root/keys/{PATH_TO_KEY}
dd if=/dev/pool/test bs=4096 | pv | ssh root@host.com -p {PORT} -i /root/keys/{PATH_TO_KEY} 'dd of=/dev/haha/test bs=4096'