Enable Browser Caching Directives in Amazon Lightsail Apache Server

By default in an Amazon Lightsail LAMP image, the Apache HTTP server is not configured with browser caching directives enabled. Specifically, the mod_expires Apache module is not loaded. If you’re not familiar with browser caching directives, this module controls the Expires HTTP header and the max-age directive of the Cache-Control HTTP header in server responses, which tell the browser how long it may cache a response. If a resource is mostly static and doesn’t change very often, the directive should establish a longer caching period than for a resource that may change hourly. These directives reduce bandwidth consumption and server load while improving the user experience, since pages load faster when the browser isn’t repeatedly requesting the same files.
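
For example, with these directives enabled, a rarely changing image might be served with headers like the following (values illustrative; max-age is expressed in seconds, so 31536000 is one year):

Cache-Control: max-age=31536000, public
Expires: Fri, 24 Jun 2022 14:58:42 GMT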

Log in to the Amazon Lightsail server and edit the httpd.conf file. In this example, the httpd.conf file is located in /opt/bitnami/apache/conf/, so you may need to adjust the path depending on your image.

sudo vim /opt/bitnami/apache/conf/httpd.conf

Find the mod_expires.so entry in the file, remove the comment character (#), and save the file.

LoadModule expires_module modules/mod_expires.so
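
After saving the file, restart Apache so the newly enabled module is loaded. On a Bitnami-based image, this is typically done with the bundled control script:

sudo /opt/bitnami/ctlscript.sh restart apache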

In the .htaccess file for your site, you can add various Expires directives for relevant file types. This example provides a small sample of common file types and can be expanded to meet your particular needs. Please adjust the expiry times as appropriate for your site. A more detailed walkthrough of .htaccess files can be found at Hardening WordPress Security Using .htaccess.

<IfModule mod_expires.c>
  ExpiresActive On
  ExpiresDefault "access plus 1 month"
  ExpiresByType text/html "access plus 0 seconds"
  ExpiresByType image/gif "access plus 1 year"
  ExpiresByType image/png "access plus 1 year"
  ExpiresByType image/jpg "access plus 1 year"
  ExpiresByType image/jpeg "access plus 1 year"
  ExpiresByType text/css "access plus 1 month"
  <IfModule mod_headers.c>
    Header append Cache-Control "public"
  </IfModule>
</IfModule>
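
Once the module is loaded and the rules are in place, you can spot-check the headers from the command line. This example assumes your site serves a PNG at the given path:

curl -I https://www.example.com/wp-content/uploads/logo.png

The response should include Expires and Cache-Control headers reflecting the one-year period configured for PNG files.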

As a demonstration of the directive in action, PNG image files are set to a one-year cache from the time the file is accessed. In the following image, a file accessed on June 24, 2021 (date) expires from the cache one year later on June 24, 2022 (expires). This information can be viewed in most browsers by opening the developer tools (Inspect) and selecting the Network tab.

Response Headers: cache-control

The browser also shows that the file was transferred from the cache and no additional bandwidth was consumed for that resource.

File Cache Transfer Status

Bash Script to Automatically Back Up an Amazon Lightsail WordPress Server to Google Drive

The intent of this Bash script is to back up the WordPress installation directory, the MySQL database, and a handful of Apache/PHP/MySQL configuration files to Google Drive on a regular basis in an encrypted archive. While this script references paths/files specific to a Bitnami-based Amazon Lightsail instance, it can easily be modified for any other Linux-based environment. I fully understand that AWS has an automatic snapshot/backup capability, but I want to use Google Drive, and I don’t want to incur additional costs since AWS snapshots are not free.

Usage

If no options are specified when executing this script, it will back up all MySQL databases accessible to the user specified in the .mysqldump.cnf file. No other files will be included in the backup. All files are packaged in a tar.gz archive, which is encrypted with AES-256 only when the -e flag is supplied.

  • -f (full backup) includes both the MySQL dump and the other files specified in the script.
  • -e is used to encrypt the backup archive. If this flag is set, then a password must be supplied in the .backup.cnf file.
  • -n x sets the number of files to retain on Google Drive in the upload directory (x is an integer value). The default is 30 files.
  • -d sets the parent folder where the backup file is uploaded on Google Drive. If the folder doesn’t exist, then the script will create it. The default folder name is “backup”.
  • -p sets the filename prefix of the backup file. The script will use the prefix followed by the date/time to set the full filename. The default filename prefix is “backup”.
  • -m is used to specify the database name to backup. The default is all databases.

Examples

This example uses all of the available options and creates a full encrypted backup of both MySQL and other files. The backup file is named backup-example-com (with a date/time suffix) and uploaded to a folder named backup-full-example-com on Google Drive. If the specified folder contains more than 5 files, the script deletes the older files, keeping only the 5 most recent based on a filename sort.

./backup.sh -e -f -n 5 -d backup-full-example-com -p backup-example-com -m db_example_com_wp_prd

Installation

As a personal preference, I store user scripts and associated configuration files in a subdirectory under home named “scripts”.

The following command creates the “scripts” subdirectory.

mkdir -m 700 ~/scripts

Next, create a file named .mysqldump.cnf. The backup script passes this file to the mysqldump command to supply the MySQL user and password used to dump the WordPress database.

touch ~/scripts/.mysqldump.cnf
chmod 600 ~/scripts/.mysqldump.cnf
vim ~/scripts/.mysqldump.cnf

In the vim editor, add a database username and password with access to dump all of the required tables and data for a backup. The single quotes surrounding the password are required if the password contains any special characters.

[mysqldump]
user=username
password='password'
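
Before wiring the file into the script, you can verify the credentials work. This test dump (assuming the config file path used in this guide) discards its output and prints a confirmation on success:

mysqldump --defaults-extra-file="$HOME/scripts/.mysqldump.cnf" --all-databases --no-tablespaces --no-data > /dev/null && echo "credentials OK"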

The following commands create the .backup.cnf file which contains a secret key used by the script to encrypt the backup file.

touch ~/scripts/.backup.cnf
chmod 600 ~/scripts/.backup.cnf
vim ~/scripts/.backup.cnf

In the vim editor, add a secret key/password to the file.

pass=password

Now we’ll create the actual backup script. Please be sure to modify any referenced paths and constants to your own environment.

touch ~/scripts/backup.sh
chmod 700 ~/scripts/backup.sh
vim ~/scripts/backup.sh

Source Code

The following is the full Bash script. Please modify any constants/paths for your own environment and requirements.

#!/bin/bash

getConfigVal() {
  # Return the value for the given key from a key=value config file;
  # anchor the match to the start of the line and keep everything after
  # the first '=' so values containing '=' are preserved
  grep "^${1}=" "${2}" | cut -d '=' -f 2-
}

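# Default settings; these may be overridden by the command-line options parsed below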
dirname_temp_backup="automated_backup"
filename_prefix_backup="backup"
filename_suffix_backup="_$(date +'%Y_%m_%d_%H_%M')"
db_name="--all-databases"
gdrive_backup_dirname="backup"
num_files_to_retain=30
path_script="$(dirname "$(readlink -f "$0")")"
path_mysqldump="$(which mysqldump)"
path_gdrive="$(which drive)"
path_backup_cnf="${path_script}/.backup.cnf"
path_mysqldump_cnf="${path_script}/.mysqldump.cnf"
path_backup="${HOME}/${dirname_temp_backup}"

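# Abort if any required path or binary could not be resolved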
if [ -z "${path_script}" ] || [ -z "${path_mysqldump}" ] || [ -z "${path_gdrive}" ] || [ -z "${path_backup_cnf}" ] || [ -z "${path_mysqldump_cnf}" ] || [ -z "${path_backup}" ]
then
  echo "ERROR: One or more required path variable(s) are undefined or missing."
  exit 1
fi

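# Parse command-line options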
while getopts efn:d:p:m: flag
do
  case "${flag}" in
    e)
      bool_encrypt_backup=1
      encryption_key=$(getConfigVal "pass" "${path_backup_cnf}")

      if [ -z "${encryption_key}" ]
      then
        echo "ERROR: Encryption key not found."
        exit 1
      fi
      ;;
    f)
      bool_full_backup=1
      ;;
    n)
      if ! [[ "${OPTARG}" =~ ^[0-9]+$ ]]
      then
        echo "WARNING: Number of backups to retain must be an integer. Reverting to default value (${num_files_to_retain})."
      else
        num_files_to_retain="${OPTARG}"
      fi
      ;;
    d)
      gdrive_backup_dirname="${OPTARG}"
      ;;
    p)
      filename_prefix_backup="${OPTARG}"
      ;;
    m)
      db_name="--databases ${OPTARG}"
      ;;
  esac
done

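# tail -n +N below prints from line N onward, so add 1 to retain exactly num_files_to_retain files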
num_files_to_retain=$((num_files_to_retain+1))

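# Remove any temp directory and archives left over from a previous run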
sudo rm -rf "${path_backup}"
sudo rm "${HOME}/${filename_prefix_backup}"*.tar.gz

mkdir "${path_backup}"

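# For a full backup, stage the web root (excluding cache directories) and key Apache/PHP/MySQL config files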
if [ -n "${bool_full_backup}" ]
then
  find /www -type d -not -path '*/cache/*' -exec mkdir -p "${path_backup}"/{} \;
  find /www -type f -not -path '*/cache/*' -exec sudo cp '{}' "${path_backup}"/{} \;
  cp /opt/bitnami/apache2/conf/vhosts/example-com-https-vhost.conf "${path_backup}"
  cp /opt/bitnami/apache2/conf/vhosts/example-com-vhost.conf "${path_backup}"
  cp /opt/bitnami/apache/conf/httpd.conf "${path_backup}"
  cp /opt/bitnami/apache/conf/bitnami/bitnami-ssl.conf "${path_backup}"
  cp /opt/bitnami/php/etc/php.ini "${path_backup}"
  cp /opt/bitnami/mysql/conf/my.cnf "${path_backup}"
fi

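# Dump the database(s) using the credentials in .mysqldump.cnf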
"${path_mysqldump}" --defaults-extra-file="${path_mysqldump_cnf}" ${db_name} --no-tablespaces -ce > "${path_backup}/mysqldump.sql"

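# Package the staged files into a tar.gz archive with restrictive permissions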
touch "${HOME}/${filename_prefix_backup}.tar.gz"

chmod 600 "${HOME}/${filename_prefix_backup}.tar.gz"

sudo tar -czf "${HOME}/${filename_prefix_backup}.tar.gz" -C "${path_backup}" .

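# If requested, encrypt the archive with AES-256-CBC and replace the unencrypted copy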
if [ -n "${bool_encrypt_backup}" ]
then
  touch "${HOME}/${filename_prefix_backup}_enc.tar.gz"

  chmod 600 "${HOME}/${filename_prefix_backup}_enc.tar.gz"

  openssl enc -e -a -md sha512 -pbkdf2 -iter 100000 -salt -AES-256-CBC -pass "pass:${encryption_key}" -in "${HOME}/${filename_prefix_backup}.tar.gz" -out "${HOME}/${filename_prefix_backup}_enc.tar.gz"

  sudo rm "${HOME}/${filename_prefix_backup}.tar.gz"
  mv "${HOME}/${filename_prefix_backup}_enc.tar.gz" "${HOME}/${filename_prefix_backup}.tar.gz"
fi

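# Add the date/time suffix to the final archive name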
mv "${HOME}/${filename_prefix_backup}.tar.gz" "${HOME}/${filename_prefix_backup}${filename_suffix_backup}.tar.gz"

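# Look up the Google Drive folder ID, creating the folder if it does not exist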
folder_id=$("${path_gdrive}" list -m 1000 -n -q 'trashed = false' | grep -m 1 "${gdrive_backup_dirname}" | cut -d ' ' -f1)

if [ -z "${folder_id}" ]
then
  "${path_gdrive}" folder -t "${gdrive_backup_dirname}"
  folder_id=$("${path_gdrive}" list -m 1000 -n -q 'trashed = false' | grep -m 1 "${gdrive_backup_dirname}" | cut -d ' ' -f1)
fi

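# Upload the archive to Google Drive, then clean up local artifacts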
"${path_gdrive}" upload --file "${HOME}/${filename_prefix_backup}${filename_suffix_backup}.tar.gz" --parent "${folder_id}"

sudo rm -rf "${path_backup}"
sudo rm "${HOME}/${filename_prefix_backup}"*.tar.gz

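# Delete backups beyond the retention limit (the filename sort puts the newest first)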
expired_file_ids=$("${path_gdrive}" list -m 1000 -n -q "'${folder_id}' in parents and trashed = false and mimeType != 'application/vnd.google-apps.folder'" | sort -k 2r | tail -n +"${num_files_to_retain}" | cut -d ' ' -f1)

if [ -n "${expired_file_ids}" ]
then
  while read -r file_id; do
    "${path_gdrive}" delete --id "${file_id}"
  done <<< "${expired_file_ids}"
fi

Google Drive

In order for the script to communicate with Google Drive, it depends on a freely available binary from a project hosted on GitHub. The following commands download the binary to a new subdirectory under home named “bin”.

cd ~
mkdir -m 700 ~/bin
cd ~/bin
wget -O drive https://drive.google.com/uc?id=0B3X9GlR6Embnb095MGxEYmJhY2c
chmod 500 ~/bin/drive

Execute the binary once it is downloaded and follow the directions to grant it access to Google Drive.

~/bin/drive

Go to the following link in your browser:

Enter verification code:

Scheduling

Now that the script and supporting files are set up, the following commands schedule the script in the user crontab. The example below schedules a full encrypted backup once per month on the 1st of the month at 1 AM, and a database-only encrypted backup every day at 2 AM. The full backup is uploaded to a Google Drive folder named “backup_full-example-com” and the five most recent backups are retained. The database-only backup is uploaded to a Google Drive folder named “backup_db_only-example-com” and the thirty most recent backups are retained. Since the script depends on other binaries, e.g. mysqldump and drive, please make sure cron has access to the appropriate user path. If it doesn’t, add the path to the crontab. Use the command echo $PATH to display the current user path, and replace the PATH line in the example below with the full value for your environment (cron does not expand $PATH in a crontab assignment, so spell the directories out).

echo $PATH

crontab -e

PATH=/usr/local/bin:/usr/bin:/bin:/home/bitnami/bin:/opt/bitnami/mysql/bin

0 1 1 * * /home/bitnami/scripts/backup.sh -e -f -n 5 -d backup_full-example-com
0 2 * * * /home/bitnami/scripts/backup.sh -e -n 30 -d backup_db_only-example-com

crontab -l

Decrypting

Let’s assume everything is executing as expected and Google Drive is populated with backups from the script. Download a backup from Google Drive to your local machine. Since the file is encrypted, you won’t be able to open it directly. To decrypt the file, OpenSSL is used with the -d flag and the same options that were used to encrypt it. Specify the filename of the downloaded encrypted file with the -in option and the filename of the resulting decrypted file with the -out option. When prompted, please be sure to enter the same password used by the script to encrypt the file (found in the .backup.cnf file). If you are downloading/decrypting on a Windows machine, you will need to install OpenSSL.

openssl enc -d -a -md sha512 -pbkdf2 -iter 100000 -salt -AES-256-CBC -in "c:\backup_full-example-com_2021_03_23_01_00.tar.gz" -out "c:\decrypted_backup_full-example-com_2021_03_23_01_00.tar.gz"

The resulting decrypted file is now readable as a regular tar.gz file.
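
To confirm the decrypted archive is intact, you can list its contents before extracting:

tar -tzf decrypted_backup_full-example-com_2021_03_23_01_00.tar.gz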

Automate Amazon Lightsail Maintenance Activities Using a Bash Script

With Amazon Lightsail, like most other virtual private servers (VPS), you are responsible for performing server maintenance activities, e.g. applying patches, once the instance is running. While it’s easy to perform this manually based on a calendar reminder, it’s also easy to forget. When added to the user crontab, the following Bash script automatically performs OS patching on a schedule that you define. Please be aware that most Lightsail images are Bitnami-based, and this method will not apply upgrades/patches to the packaged applications. Updating Apache, for example, requires moving to a new instance with the latest Bitnami image.

Since this example assumes a Lightsail environment based on a Bitnami stack (Debian), user and path specifics may need to change for your environment.

As a personal preference, I write user scripts to a “scripts” directory in my home directory. The first step creates a new “scripts” directory in my home directory with 700 permissions (read, write, and execute for the owner only). Then, I create a blank file named “maintenance.sh” to contain the script. I prefer to use vim to edit files, but please feel free to substitute your preferred editor.

mkdir -m 700 ~/scripts

touch ~/scripts/maintenance.sh
chmod 700 ~/scripts/maintenance.sh
vim ~/scripts/maintenance.sh

The next step is to add the script to the blank maintenance.sh file. The script performs only a few actions: “apt update” retrieves the latest list of available packages and versions, but does not upgrade any packages; “apt upgrade” upgrades installed packages to their latest versions; and “apt autoclean” removes package files from the local cache that can no longer be downloaded, such as packages that have been uninstalled or superseded by newer versions.

The script also deletes older log files in the Apache, MySQL, and PHP directories. The rm commands keep only the latest five log files of each type, relying on the date-based file naming convention for sorting.

#!/bin/bash

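# Refresh package lists, upgrade installed packages, and remove cached packages that can no longer be downloaded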
sudo apt update && sudo apt upgrade -y
sudo apt autoclean

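# Keep only the five most recent rotated log files in each Bitnami log directory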
sudo rm -f $( find /opt/bitnami/apache/logs -maxdepth 1 -iname '*access_log-*' -type f | sort -r | tail -n +6 )
sudo rm -f $( find /opt/bitnami/apache/logs -maxdepth 1 -iname '*error_log-*' -type f | sort -r | tail -n +6 )
sudo rm -f $( find /opt/bitnami/mysql/logs -maxdepth 1 -iname '*mysqld.log-*' -type f | sort -r | tail -n +6 )
sudo rm -f $( find /opt/bitnami/php/logs -maxdepth 1 -iname '*php-fpm.log-*' -type f | sort -r | tail -n +6 )

The last step is to schedule the job using cron. The next command allows you to edit the user crontab.

crontab -e

As an example, add the following line to the user crontab to schedule the script to run at 3:00 AM every Sunday.

0 3 * * 0 /home/bitnami/scripts/maintenance.sh

Exit the editor, and your maintenance activities will be performed automatically as scheduled.