Running and Debugging Specific WordPress Cron Hooks Manually Using WP-CLI

If your WordPress instance is configured with debugging turned off (WP_DEBUG set to false in wp-config.php) and you need to troubleshoot, debug, or run a specific WordPress cron hook or function manually, you'll find that it's pretty tough to do so.  Fortunately, I found a relatively easy way to do exactly this using WP-CLI.

Install WP-CLI

Install WP-CLI by using the below commands as root:

sudo -i
curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
chmod +x wp-cli.phar
sudo mv wp-cli.phar /usr/local/bin/wp
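
To confirm WP-CLI installed correctly, you can print its environment info (the --allow-root flag is needed since we're working as root):

wp --info --allow-root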

Now, change into your WordPress instance's vhost directory:

cd /var/www/vhosts/{YOUR_DOMAIN}/httpdocs

Update the above path as necessary.

To run all the scheduled WordPress cron events, use this command:

wp cron event run --all --allow-root

To run a specific WordPress cron hook, use this command:

wp cron event run "NAME_OF_CRON_HOOK" --allow-root

If you need help determining the name of the cron hook you specifically want to run or debug, install the WP Crontrol plugin in your WordPress instance. Then visit the /wp-admin/tools.php?page=crontrol_admin_manage_page page to see a list of cron hooks and when they run.  Grab the name from the "Hook" column and update your WP-CLI command to run that cron event!
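
Alternatively, WP-CLI can list the scheduled events itself, which avoids installing a plugin just for this:

wp cron event list --allow-root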

You can add echo statements and other debugging code to the functions your plugins hook into cron to better debug and troubleshoot them.

OpenVPN Expired CRL – VPN Won’t Connect

Recently, I ran into an issue where OpenVPN was no longer working for existing clients.  After looking at the OpenVPN log in /var/log/openvpn.log, I found the following:

VERIFY ERROR: depth=0, error=CRL has expired:

If you see an OpenVPN error about an expired certificate revocation list (CRL), here's how to generate a new CRL:

cd /etc/openvpn/easy-rsa
EASYRSA_CRL_DAYS=3650 ./easyrsa gen-crl
cp /etc/openvpn/easy-rsa/pki/crl.pem /etc/openvpn/crl.pem
chown nobody:nogroup /etc/openvpn/crl.pem
service openvpn restart
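
You can verify the new CRL's expiration date at any time with openssl:

openssl crl -in /etc/openvpn/crl.pem -noout -nextupdate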

Problem solved!

Support Older TLS Versions in Newer Ubuntu / Debian OS Versions

Edit the openssl.cnf file:

sudo nano /etc/ssl/openssl.cnf

Add this line at the top:

openssl_conf = openssl_init

And add these lines at the end:

[openssl_init]
ssl_conf = ssl_sect

[ssl_sect]
system_default = system_default_sect

[system_default_sect]
CipherString = DEFAULT@SECLEVEL=1
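
After saving the file, you can verify that older protocol versions are accepted again by forcing a TLS 1.0 handshake against the legacy host you need to reach (example.com below is a placeholder, and the handshake will only succeed if that server actually still offers TLS 1.0):

openssl s_client -connect example.com:443 -tls1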

https://askubuntu.com/questions/1233186/ubuntu-20-04-how-to-set-lower-ssl-security-level#answer-1296578

cURL and wget Issues on Ubuntu 16.04 – SSL: TLSV1_ALERT_PROTOCOL_VERSION

cURL and wget Issues on Ubuntu 16.04

When using wget or curl to make HTTPS requests from a no-longer-supported installation of Ubuntu 16.04 (Xenial), you may run into any of the following errors:

curl gnutls_handshake() failed: Error in protocol version
curl: (35) error:1407742E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version
[SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:727) 

The solution is to add Rob Savoury's (savoury1) PPAs to get updated curl and wget packages:

sudo add-apt-repository ppa:savoury1/build-tools
sudo add-apt-repository ppa:savoury1/backports
sudo add-apt-repository ppa:savoury1/python
sudo add-apt-repository ppa:savoury1/encryption
sudo add-apt-repository ppa:savoury1/curl34
sudo apt-get update && sudo apt-get upgrade
sudo apt-get install wget curl python2.7
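
Once the upgrade finishes, a quick sanity check confirms the newer curl build is in place and that modern TLS handshakes now succeed (swap example.com for the host that was failing before):

curl --version
curl -I https://example.com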

ASP.NET CORE – Smart Way to Prevent Cross-Site Request Forgery (CSRF) Attempts – Protect AJAX XHR Requests

ASP.NET CORE MVC – Protect AJAX Requests from CSRF Attempts

This is a follow-up post related to https://blog.eamster.tk/asp-net-mvc-smart-way-to-prevent-cross-site-request-forgery-csrf-attempts-webapi-ajax-xhr-and-normal-post-operations/

I've modified the code from the linked post above so that it works with ASP.NET Core.  Below is the code that can protect all AJAX requests from CSRF (Cross-Site Request Forgery) attempts.  For normal <form> POST requests, you should still use and validate a CSRF token, but if your application is split into multiple pieces (for example, a Node.js React front-end and a .NET Core based API), this is an easy way to help prevent CSRF attacks without the use of tokens.

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

namespace AnalyticsAPI.Filters
{
    public class CSRFAjaxRequestMitigation : IAuthorizationFilter
    {
        public void OnAuthorization(AuthorizationFilterContext filterContext)
        {
            IServiceProvider services = filterContext.HttpContext.RequestServices;
            IConfiguration Configuration = services.GetService<IConfiguration>();

            string validOrigins = Configuration.GetValue<string>("AllowedEnvironments"); // Example in appsettings.json "AllowedEnvironments": "https://testurl.com:4443,https://testurl.com,https://testurl2.com", 
            bool skipCheck = false;

            if(Configuration.GetValue<string>("ENVIRONMENT") == "LOCAL")
            {
                skipCheck = true;
            }

            // In AJAX requests, the origin header is always sent (UNLESS IT'S COMING FROM THE SAME ORIGIN), so we can validate that it comes from a trusted location to prevent CSRF attacks - but if one isn't sent, we won't do anything (assume trusted)
            // In which case, we don't need to do any token checking either
            if (!skipCheck && !string.IsNullOrEmpty(validOrigins))
            {
                List<string> validOriginURLs = validOrigins.Split(',').ToList();
                if (!string.IsNullOrEmpty(filterContext.HttpContext.Request.Headers["Origin"].ToString()))
                {
                    string origin = filterContext.HttpContext.Request.Headers["Origin"];
                    if (!validOriginURLs.Contains(origin, StringComparer.OrdinalIgnoreCase))
                    {
                        filterContext.Result = new UnauthorizedResult();
                    }
                }
            }
        }
    }

    public class CSRFMitigationAttribute : TypeFilterAttribute
    {
        public CSRFMitigationAttribute()
            : base(typeof(CSRFAjaxRequestMitigation))
        {
            Arguments = new object[] {};
        }
    }
} 
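
To use it, decorate a controller or action with the [CSRFMitigation] attribute defined above, and make sure your appsettings.json contains the AllowedEnvironments list of trusted origins (plus the ENVIRONMENT value if you want the check skipped locally).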

Ubuntu: Allow Automatic Updates for Specific Packages Only

Ubuntu: Allow Automatic Updates for Specific Packages Only

If you want to allow Google products and packages to update automatically, follow this guide.

You can also add additional sources that should update automatically following the same process.

This is helpful when using Selenium, WebDriver for Chrome, and Python.  Doing this allows you to always use the most up-to-date version of all of these dependent packages.

Tested in Ubuntu 20.04
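
As a rough sketch of the idea, assuming the automatic updates are handled by the unattended-upgrades package and that the Google repository reports its origin as "Google LLC" (verify this on your system before copying anything):

sudo apt-get install unattended-upgrades
apt-cache policy google-chrome-stable    # note the "o=..." value in the release line, e.g. o=Google LLC

# Then whitelist that origin in /etc/apt/apt.conf.d/50unattended-upgrades:
#   Unattended-Upgrade::Origins-Pattern {
#           "origin=Google LLC";
#   };

# Dry run to see exactly which packages unattended-upgrades would update:
sudo unattended-upgrade --dry-run --debug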

MongoDB BSON Restore, Converting to JSON, and More MongoDB Helpful Commands

MongoDB Helpful Scripts & Commands

Restoring BSON Backups

The below Windows batch script extracts all of the gzipped (.gz) BSON MongoDB collection backup files and then restores those collections into a particular Mongo database:

@ECHO ON
SET SourceDir=%~dp0
cd %SourceDir%
mkdir "extracted"
mkdir "extracted\json"
FOR /R %SourceDir% %%A IN ("*.gz") DO "C:\Program Files\7-Zip\7z.exe" x "%%~A" -o"%SourceDir%\extracted"
mongorestore -d {DATABASE_NAME_TO_RESTORE_TABLES_INTO} --host localhost:27017 "extracted"
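
Depending on your mongorestore version, you may be able to skip the extraction step entirely: newer versions of mongorestore can read gzipped dumps directly with the --gzip flag (this assumes the .gz files came from a standard mongodump --gzip backup):

mongorestore --gzip -d {DATABASE_NAME_TO_RESTORE_TABLES_INTO} --host localhost:27017 {PATH_TO_GZIPPED_BACKUP_FOLDER}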

Converting MongoDB Tables and Data to Proper JSON

If you want to convert MongoDB collections and their data into JSON, you can use the below command:

mongoexport -d {DATABASE_TO_EXPORT_FROM} --host localhost:27017 -c {TABLE_NAME_TO_EXPORT_CONVERT_INTO_JSON} --jsonArray --pretty -o "%SourceDir%\extracted\json\{TABLE_NAME_BEING_CONVERTED_TO_JSON_NAME}.json"

With older versions of MongoDB, the exported file uses MongoDB Extended JSON, so $date and $numberLong values are wrapped in extra objects that most other databases and languages won't accept as-is. To flatten them into plain values, you can run the below Python script (it is written against Python 3's input(); under Python 2.7, replace input() with raw_input()):

######################################
# About                              #
######################################

# Author:   Eric Arnol-Martin https://eamster.tk
# Purpose:  Replaces Mongo's Exported DateTime Format with Proper DateTime String Representations for Easier Import for Other Databases / Programming Languages
# Expects:  Mongo JSON Exported Pretty File.  
#			For example, a file produced by a command similar to 
#			"mongoexport -d {db_name} --host localhost:27017 -c {table_name} --jsonArray --pretty -o {table_name}.json"
# Tests:    RegEx Test Link:  https://regexr.com/68l7r
# Outputs:  Creates a copy of the input JSON file with DateTime objects replaced with their proper string representation in the same directory as the original file 
#			with the same file name suffixed with "_NEW" at the end of it.
# Sources:  https://stackoverflow.com/questions/2503413/regular-expression-to-stop-at-first-match
#			https://stackoverflow.com/questions/159118/how-do-i-match-any-character-across-multiple-lines-in-a-regular-expression

######################################
# Imports                            #
######################################

import re
from os.path import exists
import fileinput


######################################
# Actual Program                     #
######################################

path = input ("Enter path or name of file to parse: ")
prevPiece = None
content_new = ""
boolReplaceFromNextLine = False
boolHandlingLong = False

if exists(path):
	# Clear new file
	b = open(path + "_NEW", "w+")
	b.close()
	count = 0
	recordCount = 0

	for line in fileinput.input(files=path):
		count = count + 1
		if '"$date":' in line: 
			content_new = content_new[0:content_new.rindex('{')] + line.replace('"$date": {', '').replace('"$date":', '').replace('\n', '').strip();
			boolReplaceFromNextLine = True
		else:
			if boolReplaceFromNextLine:
				if '"$numberLong":' in line:
					content_new = content_new + line.replace('"$numberLong":', '').replace('\n', '').strip();
					boolReplaceFromNextLine = True
					boolHandlingLong = True
				else:
					if boolHandlingLong:
						content_new = content_new + line.replace('}', '').strip();
						boolReplaceFromNextLine = True
						boolHandlingLong = False
					else:
						boolReplaceFromNextLine = False
						content_new = content_new + line.replace('}', '').strip() + '\n';
			else:
				content_new = content_new + line		
		
		if line == '},\n' and content_new:
			recordCount = recordCount + 1
			b = open(path + "_NEW", "a+")
			b.write(content_new)
			b.close()	
			content_new = ""
			print('Record ' + str(recordCount) + ' processed... adding it to the file...')
			
		prevPiece = line
	
	if content_new:
		b = open(path + "_NEW", "a+")
		b.write(content_new)
		b.close()	
		content_new = ""
		recordCount = recordCount + 1
		print('Record ' + str(recordCount) + ' processed... adding it to the file...')
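
To run the fix-up, save the script and execute it with Python 3; it prompts for the exported file and writes the cleaned copy next to it with "_NEW" appended to the name (the script filename below is just an example):

python3 fix_mongo_export.py    # on Windows, use: py fix_mongo_export.py
# When prompted, enter the path to the export, e.g. extracted/json/{TABLE_NAME}.json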

Rebuilding a Removed / Failed RAID 10 Array in CentOS / Rocky Linux

Replace Hard Drive in a RAID 10 Array and Sync the RAID 10 Array to the New Hard Drive

I had the hardest time rebuilding a RAID 10 array after replacing a hard drive.  I didn't fail the old hard drive before removing it from the array, and sometimes, this may not be an option.  What happened in my case is the data center replaced the hard drive that I had shipped to them directly from an eBay seller.  I was hoping that the RAID array would rebuild itself onto the new drive (as I have seen happen before in some circumstances).  However, that may not happen if the replacement drive still has its old RAID array or partition information present, and then, it might be difficult to actually get the RAID array to sync to the new drive. 

In my case, I run LVM (Logical Volume Manager) for my partitions.  This complicates the RAID setup, and I found that mdadm commands didn't work as expected.  If this situation occurs, it is best to boot Rocky Linux or CentOS in recovery mode using a Rocky Linux ISO or CentOS ISO.  Once the recovery system loads, drop to a shell without mounting any file systems.  Next, you will need to deactivate your LVM volume group:

vgdisplay
vgchange -a n my_volume_group # deactivate

Next, examine your md RAID array by running the following command:

cat /proc/mdstat

After running that command, I identified my RAID devices as md126 and md127.  /dev/md127 is considered the parent device even though /dev/md126 is where everything actually lives.

I can get more information about the RAID array by running the below commands:

mdadm --detail /dev/md126
mdadm --detail /dev/md127

Now, remove any failed or detached (no longer present) devices from both arrays using these commands:

mdadm /dev/md126 --remove failed
mdadm /dev/md126 --remove detached
mdadm /dev/md127 --remove failed
mdadm /dev/md127 --remove detached

Next, identify the new hard drive that will replace the removed drive in the array:

lsblk

From running the above command, I noticed that the new drive was /dev/sde, so I needed to wipe its old RAID configuration (if there is any) and then add it to the RAID array.

wipefs -a /dev/sde
mdadm --add /dev/md127 /dev/sde

Check to see if the syncing process has started:

cat /proc/mdstat

You may or may not need to run the below command to get the RAID device to start syncing to the new drive:

mdadm --grow /dev/md126 --raid-devices=4
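
While the array rebuilds, you can watch the resync progress update in place:

watch -n 5 cat /proc/mdstat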

Helpful Links:

https://delightlylinux.wordpress.com/2020/12/22/how-to-remove-a-drive-from-a-raid-array/
https://serverfault.com/questions/554553/how-to-delete-removed-devices-from-a-mdadm-raid1
https://unix.stackexchange.com/questions/53129/dev-md127-refuses-to-stop-no-open-files
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/vg_activate
https://serverfault.com/questions/676638/mdadm-drive-replacement-shows-up-as-spare-and-refuses-to-sync

Setup Remote Logging on an Ubuntu rsyslog Server for DD-WRT to Use

Enable remote logging on an Ubuntu server by configuring rsyslog to accept remote connections on port 514 (adjust as needed):

sudo nano /etc/rsyslog.conf

Uncomment the imudp and imtcp load module statements like so (adjusting as needed):

# provides UDP syslog reception
module(load="imudp")
input(type="imudp" port="514")

# provides TCP syslog reception
module(load="imtcp")
input(type="imtcp" port="514")

Create a logging template and apply it only to remote hosts whose names start with "c-" (the Comcast remote host prefix, which is followed by the device's IP address and can change):

# Comcast remote logging
$template remote-incoming-logs,"/var/log/remote_logs/%HOSTNAME%/%PROGRAMNAME%.log"
if $fromhost startswith "c-" then -?remote-incoming-logs

Save and quit.

Restart the rsyslog daemon:

sudo service rsyslog restart

Remote logs will be stored in /var/log/remote_logs
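
You can send a test message from another Linux host to confirm the server is accepting remote syslog traffic (replace 192.168.1.1 with your rsyslog server's IP; use -T instead of -d for TCP). Note that messages from hosts that don't match the "c-" rule will land in the normal /var/log/syslog instead:

logger -n 192.168.1.1 -P 514 -d "remote syslog test"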

Configure logrotate to process and rotate these logs automatically (so you don't lose them and have a history on them):

sudo nano /etc/logrotate.d/ddwrt

Paste these contents into the file:

/var/log/remote_logs/*.log /var/log/remote_logs/*/*.log {
    daily
    missingok
    compress
    delaycompress
    su syslog adm
}

Save and quit.
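
You can dry-run the new logrotate rule to make sure the paths and options parse correctly without actually rotating anything:

sudo logrotate -d /etc/logrotate.d/ddwrt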

Everything has been configured, and remote logging should work from your DD-WRT router once you point its remote syslog setting to your server's IP address and port and apply the changed settings.

CentOS LVM and Software RAID Partitioning Instructions

Installing and Configuring CentOS to Host KVM Virtual Machines

GUI

When configuring a fresh install of CentOS for a KVM host machine (the main server that hosts all of the virtual machines), I like to run a GUI to make managing some of the virtual machines easier.  Thus, during install, choose the options for CentOS with Minimal GUI.

RAID 10 LVM Partitions

When configuring the hard drive partitions, set them up to use LVM on top of RAID 10 software RAID:

Create a volume group called "vms" (without the quotes) that is set up as RAID 10, and make the volume group as large as possible.

Set the "/" partition to 100GB XFS LVM (RAID10).

Set the "swap" partition to 32GB.

Only set up those two partitions.  The remaining space in the RAID 10 volume group "vms" will be used for KVM containers, and it does not need to be assigned to any mount points.
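
After the install, that leftover space in the "vms" volume group is what you carve into logical volumes for each KVM guest. A quick sketch (the volume name "vm1" and its size are just examples):

vgs                              # confirm the free space remaining in the "vms" volume group
lvcreate -L 100G -n vm1 vms      # create a 100GB logical volume for a guest
lvs                              # verify the new logical volume exists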

That's all.
