Bash Script to Automatically Backup Amazon Lightsail WordPress Server to Google Drive

The intent of this Bash script is to back up the WordPress installation directory, the MySQL database, and a handful of Apache/PHP/MySQL configuration files to Google Drive on a regular basis in an encrypted archive. While this script references paths/files specific to a Bitnami-based Amazon Lightsail instance, it can easily be modified for any other Linux-based environment. I fully understand that AWS has an automatic snapshot/backup capability, but I want to use Google Drive and I don’t want to incur any additional costs since AWS snapshots are not free.

Usage

If no options are specified when executing this script, it will back up all MySQL databases accessible to the user specified in the .mysqldump.cnf file. No other files will be included in the backup. All files are packaged in a tar.gz archive and, when the -e flag is set, encrypted using AES-256.

  • -f (full backup) includes both the MySQL dump as well as other files specified in the script.
  • -e is used to encrypt the backup archive. If this flag is set, then a password must be supplied in the .backup.cnf file.
  • -n x sets the number of files to retain on Google Drive in the upload directory (x is an integer value). The default is 30 files.
  • -d sets the parent folder where the backup file is uploaded on Google Drive. If the folder doesn’t exist, then the script will create it. The default folder name is “backup”.
  • -p sets the filename prefix of the backup file. The script will use the prefix followed by the date/time to set the full filename. The default filename prefix is “backup”.
  • -m is used to specify the database name to backup. The default is all databases.

Examples

This example uses all of the available options and creates a full encrypted backup of both MySQL and other files. The backup file is named backup-example-com (with a date/time suffix) and uploaded to a folder named backup-full-example-com on Google Drive. If the specified folder contains more than 5 files, the script deletes the files beyond the 5 most recent, based on a filename sort.

./backup.sh -e -f -n 5 -d backup-full-example-com -p backup-example-com -m db_example_com_wp_prd

Installation

As a personal preference, I store user scripts and associated configuration files in a subdirectory under home named “scripts”.

The following command creates the “scripts” subdirectory.

mkdir -m 700 ~/scripts

Next, create a file named .mysqldump.cnf. This file is used by the mysqldump command in the backup script to supply the MySQL user and password needed to dump the WordPress database.

touch ~/scripts/.mysqldump.cnf
chmod 600 ~/scripts/.mysqldump.cnf
vim ~/scripts/.mysqldump.cnf

In the vim editor, add a database username and password with access to dump all of the required tables and data for a backup. Single quotes around the password are needed if the actual password contains any special characters.

[mysqldump]
user=username
password='password'
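
Before relying on these credentials in the script, a quick sanity check like the following should print table definitions without errors (db_example_com_wp_prd is only a placeholder; substitute your own database name):

mysqldump --defaults-extra-file="${HOME}/scripts/.mysqldump.cnf" --no-data db_example_com_wp_prd | head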

The following commands create the .backup.cnf file which contains a secret key used by the script to encrypt the backup file.

touch ~/scripts/.backup.cnf
chmod 600 ~/scripts/.backup.cnf
vim ~/scripts/.backup.cnf

In the vim editor, add a secret key/password to the file.

pass=password
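
Any sufficiently long random string works as the key. As one example, the following command (openssl is already present on the Bitnami image) generates a strong random value to paste after pass=:

openssl rand -base64 32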

Now we’ll create the actual backup script. Please be sure to modify any referenced paths and constants to your own environment.

touch ~/scripts/backup.sh
chmod 700 ~/scripts/backup.sh
vim ~/scripts/backup.sh

The following is the full Bash script. Please modify any constants/paths for your own environment and requirements.

#!/bin/bash

getConfigVal() {
  # Return the value portion of a key=value line from a config file.
  grep "${1}" "${2}" | cut -d '=' -f 2
}

dirname_temp_backup="automated_backup"
filename_prefix_backup="backup"
filename_suffix_backup="_$(date +'%Y_%m_%d_%H_%M')"
db_name="--all-databases"
gdrive_backup_dirname="backup"
num_files_to_retain=30
path_script="$(dirname "$(readlink -f "$0")")"
path_mysqldump="$(which mysqldump)"
path_gdrive="$(which drive)"
path_backup_cnf="${path_script}/.backup.cnf"
path_mysqldump_cnf="${path_script}/.mysqldump.cnf"
path_backup="${HOME}/${dirname_temp_backup}"

if [ -z "${path_script}" ] || [ -z "${path_mysqldump}" ] || [ -z "${path_gdrive}" ] || [ -z "${path_backup_cnf}" ] || [ -z "${path_mysqldump_cnf}" ] || [ -z "${path_backup}" ]
then
  echo "ERROR: One or more required path variable(s) are undefined or missing."
  exit 1
fi

while getopts efn:d:p:m: flag
do
  case "${flag}" in
    e)
      bool_encrypt_backup=1
      encryption_key=$(getConfigVal "pass" "${path_backup_cnf}")

      if [ -z "${encryption_key}" ]
      then
        echo "ERROR: Encryption key not found."
        exit 1
      fi
      ;;
    f)
      bool_full_backup=1
      ;;
    n)
      if ! [[ "${OPTARG}" =~ ^[0-9]+$ ]]
      then
        echo "WARNING: Number of backups to retain must be an integer. Reverting to default value (${num_files_to_retain})."
      else
        num_files_to_retain="${OPTARG}"
      fi
      ;;
    d)
      gdrive_backup_dirname="${OPTARG}"
      ;;
    p)
      filename_prefix_backup="${OPTARG}"
      ;;
    m)
      db_name="--databases ${OPTARG}"
      ;;
  esac
done

num_files_to_retain=$((num_files_to_retain+1))

sudo rm -rf "${path_backup}"
sudo rm "${HOME}/${filename_prefix_backup}"*.tar.gz

mkdir "${path_backup}"

if [ -n "${bool_full_backup}" ]
then
  find /www -type d -not -path '*/cache/*' -exec mkdir -p "${path_backup}"/{} \;
  find /www -type f -not -path '*/cache/*' -exec sudo cp '{}' "${path_backup}"/{} \;
  cp /opt/bitnami/apache2/conf/vhosts/example-com-https-vhost.conf "${path_backup}"
  cp /opt/bitnami/apache2/conf/vhosts/example-com-vhost.conf "${path_backup}"
  cp /opt/bitnami/apache/conf/httpd.conf "${path_backup}"
  cp /opt/bitnami/apache/conf/bitnami/bitnami-ssl.conf "${path_backup}"
  cp /opt/bitnami/php/etc/php.ini "${path_backup}"
  cp /opt/bitnami/mysql/conf/my.cnf "${path_backup}"
fi

"${path_mysqldump}" --defaults-extra-file="${path_mysqldump_cnf}" ${db_name} --no-tablespaces -ce > "${path_backup}/mysqldump.sql"

touch "${HOME}/${filename_prefix_backup}.tar.gz"

chmod 600 "${HOME}/${filename_prefix_backup}.tar.gz"

sudo tar -czf "${HOME}/${filename_prefix_backup}.tar.gz" -C "${path_backup}" .

if [ -n "${bool_encrypt_backup}" ]
then
  touch "${HOME}/${filename_prefix_backup}_enc.tar.gz"

  chmod 600 "${HOME}/${filename_prefix_backup}_enc.tar.gz"

  openssl enc -e -a -md sha512 -pbkdf2 -iter 100000 -salt -AES-256-CBC -pass "pass:${encryption_key}" -in "${HOME}/${filename_prefix_backup}.tar.gz" -out "${HOME}/${filename_prefix_backup}_enc.tar.gz"

  sudo rm "${HOME}/${filename_prefix_backup}.tar.gz"
  mv "${HOME}/${filename_prefix_backup}_enc.tar.gz" "${HOME}/${filename_prefix_backup}.tar.gz"
fi

mv "${HOME}/${filename_prefix_backup}.tar.gz" "${HOME}/${filename_prefix_backup}${filename_suffix_backup}.tar.gz"

folder_id=`"${path_gdrive}" list -m 1000 -n -q 'trashed = false' | grep -m 1 "${gdrive_backup_dirname}" | head -1 | cut -d ' ' -f1`

if [ -z "${folder_id}" ]
then
  "${path_gdrive}" folder -t "${gdrive_backup_dirname}"
  folder_id=`"${path_gdrive}" list -m 1000 -n -q 'trashed = false' | grep -m 1 "${gdrive_backup_dirname}" | head -1 | cut -d ' ' -f1`
fi

"${path_gdrive}" upload --file "${HOME}/${filename_prefix_backup}${filename_suffix_backup}.tar.gz" --parent "${folder_id}"

sudo rm -rf "${path_backup}"
sudo rm "${HOME}/${filename_prefix_backup}"*.tar.gz

expired_file_ids=`"${path_gdrive}" list -m 1000 -n -q "'${folder_id}' in parents and trashed = false and mimeType != 'application/vnd.google-apps.folder'" | sort -k 2r | tail -n +"${num_files_to_retain}" | cut -d ' ' -f1`

if [ -n "${expired_file_ids}" ]
then
  while read -r file_id; do
    "${path_gdrive}" delete --id "${file_id}"
  done <<< "${expired_file_ids}"
fi

In order for the script to communicate with Google Drive, it depends on a freely available binary from a project hosted on GitHub. The following commands download the binary to a new subdirectory under home named “bin”.

cd ~
mkdir -m 700 ~/bin
cd ~/bin
wget -O drive "https://drive.google.com/uc?id=0B3X9GlR6Embnb095MGxEYmJhY2c"
chmod 500 ~/bin/drive

Execute the binary once it is downloaded and follow the directions to grant it access to Google Drive. After running the command below, the tool displays an authorization link and asks for a verification code:

~/bin/drive

Go to the following link in your browser:

Enter verification code:
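
After the verification code is accepted, a quick listing confirms the tool can reach Google Drive (the -m flag limits the number of results, matching how the backup script calls it):

~/bin/drive list -m 5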

Scheduling

Now that the script and supporting files are set up, the following commands are used to schedule the script in the user crontab. The example below schedules a full encrypted backup once per month on the 1st of the month at 1 AM, and a database-only encrypted backup every day at 2 AM. The full backup is uploaded to a Google Drive folder named “backup_full-example-com” and the five most recent backups are retained. The database-only backup is uploaded to a Google Drive folder named “backup_db_only-example-com” and the thirty most recent backups are retained. Since the script depends on other binaries, e.g. mysqldump and drive, please make sure cron has access to the appropriate user path. If it doesn’t, add the path to the crontab. Use the command echo $PATH to display the current user path and replace the example below with the appropriate path for the environment.

echo $PATH

crontab -e

PATH=$PATH:/home/bitnami/bin:/opt/bitnami/mysql/bin

0 1 1 * * /home/bitnami/scripts/backup.sh -e -f -n 5 -d backup_full-example-com
0 2 * * * /home/bitnami/scripts/backup.sh -e -n 30 -d backup_db_only-example-com

crontab -l

Decrypting

Let’s assume everything is executing as expected and Google Drive is populated with backups from the script. Download a backup from Google Drive to your local machine. Since the file is encrypted, you won’t be able to open it directly. To decrypt the file, OpenSSL is used with the -d flag and the same options that were used to encrypt it. Specify the filename of the downloaded encrypted file with the -in option and the filename of the resulting decrypted file with the -out option. When prompted, please be sure to enter the same password used by the script to encrypt the file (found in the .backup.cnf file). If you are downloading/decrypting on a Windows machine, you will need to install OpenSSL.

openssl enc -d -a -md sha512 -pbkdf2 -iter 100000 -salt -AES-256-CBC -in "c:\backup_full-example-com_2021_03_23_01_00.tar.gz" -out "c:\decrypted_backup_full-example-com_2021_03_23_01_00.tar.gz"

The resulting decrypted file is now readable as a regular tar.gz file.
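
For example, on a Linux or macOS machine the decrypted archive can then be unpacked with tar (adjust the filename to match your download):

tar -xzvf decrypted_backup_full-example-com_2021_03_23_01_00.tar.gz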

Automate Amazon Lightsail Maintenance Activities Using Bash Script

With Amazon Lightsail, like most other virtual private servers (VPS), you are responsible for performing server maintenance activities, e.g. applying patches, once the instance is running. While it’s easy to perform this manually based on a calendar reminder, it’s also easy to forget to do it periodically. When added to the user crontab, the following Bash script automatically applies OS patches on a schedule that you define. Please be aware that most Lightsail images are Bitnami-based and this method will not apply upgrades/patches to the packaged applications. Updating Apache, for example, requires moving to a new instance with the latest Bitnami image.

Since this example assumes a Lightsail environment based on a Bitnami stack (Debian), user and path specifics may need to change for your specific environment.

As a personal preference, I write user scripts to a “scripts” directory in my home directory. This first step creates a new “scripts” directory in my home directory with 700 permissions (read, write, execute). Then, I create a blank file named “maintenance.sh” to contain the script. I prefer to use vim to edit files, but please feel free to adjust to your preferred editor.

mkdir -m 700 ~/scripts

touch ~/scripts/maintenance.sh
chmod 700 ~/scripts/maintenance.sh
vim ~/scripts/maintenance.sh

The next step is to add the script to the blank maintenance.sh file. The script performs only a few actions: “apt update” retrieves the latest list of available packages and versions, but it does not upgrade any packages; “apt upgrade” upgrades installed packages to the latest versions. Finally, “apt autoclean” removes package files from the local cache that can no longer be downloaded.

#!/bin/bash

# https://www.dalesandro.net/automate-amazon-lightsail-maintenance-activities-using-bash-script/

sudo apt update && sudo apt upgrade -y
sudo apt autoclean

The last step is to schedule the job using cron. The next command allows you to edit the user crontab.

crontab -e

As an example, add the following line to the user crontab to schedule the script to run at 3:00 AM every Sunday.

0 3 * * 0 /home/bitnami/scripts/maintenance.sh
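
If you want a record of each run, a variant like the following (assuming the log path is writable by the bitnami user) appends the script output to a log file:

0 3 * * 0 /home/bitnami/scripts/maintenance.sh >> /home/bitnami/scripts/maintenance.log 2>&1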

Exit the editor and your maintenance activities will be performed automatically as scheduled.

How to Migrate an Existing WordPress Site to Amazon Lightsail LAMP

When I first started www.dalesandro.net, I found a relatively inexpensive shared hosting provider. The site remained in that environment for years. I never had any significant technical issues with the provider and the cost had remained flat. Keep in mind that this is an extremely low traffic site running a straightforward WordPress installation.

After receiving a renewal notice from the shared hosting provider with a substantial price increase, I decided to research other available options. Around the same time, I obtained the AWS Certified Cloud Practitioner (AWS CCP) certification where I learned of an offering called Amazon Lightsail. At its core, Lightsail allows you to instantiate one or more virtual private servers – a virtual machine, storage, bandwidth, and a static IP – for a (somewhat) predictable monthly price.

Within Amazon Lightsail, there are a number of options to establish an instance, e.g. the operating system, application stack, and server size. While you can select a Linux/Unix instance with WordPress already installed, I chose the LAMP (PHP 7) blueprint because I wanted to migrate an existing WordPress installation on my own and avoid any potential issues with a pre-installed package.

Step 1 – AWS Basics

If you don’t already have an AWS account, the first step is to create an account. Navigate to AWS and follow the on-screen instructions. I won’t walk through this part since it is a straightforward process to create an account. Also, I suggest researching AWS pricing to better understand how actions or selections impact the monthly bill. While Lightsail instances are considered fixed-price per month, if products/services are activated or the bandwidth allotment is exceeded, then the monthly bill will fluctuate. Depending on the cost trigger, the pricing change could be substantial.

Step 2 – Migration Preparation

In my opinion, this is the most difficult and time-consuming part of the process. Since I have an existing site with a shared hosting provider, I need to move the WordPress files as well as the database. The shared host uses cPanel, so there is a capability to download a copy of both the account directory as well as a dump of any MySQL databases. These backups are available in “Account Backups” within cPanel.

In my case, the challenge is that the directory structure on my shared host doesn’t match the structure that I want to use in Lightsail. After downloading both the account directory and MySQL backups, I make copies of the files in order to have a clean backup and a working copy. I then restructure the files manually, i.e. moving directories and changing paths in the database data. There are plugins available for WordPress to handle this search-and-replace activity, however, I am not familiar with them and I am comfortable making the changes myself. Also, since the site is mostly static, I work through the changes manually and at my own pace without worrying much about losing comment submissions in the meantime.

As far as the directory structure changes, I am moving from having WordPress installed within a sub-directory of my account to having WordPress installed in a primary directory. I have to update all sub-directory path references in files such as wp-config.php and .htaccess as well as in the WordPress database tables. For my site, the majority of this search-and-replace activity is in the MySQL dump. Multiple references to both the sub-directory and the underlying file path on the shared host appear in the options table as well as various plug-in tables. Many of the path references are embedded in serialized data within the MySQL dump, so I take a risk updating them manually under the assumption they can be re-serialized by simply re-saving the settings once the site is up and running.
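
For plain-text files such as wp-config.php and .htaccess, a simple search-and-replace is usually enough; the sketch below (both paths are purely illustrative) rewrites an old shared-host path in place. Serialized PHP data in the SQL dump embeds string lengths, so a plain substitution there is only safe if the replacement has the same length or the affected settings are re-saved afterwards, as noted above.

sed -i 's|/home/olduser/public_html/blog|/www/example-com/public_html|g' wp-config.php .htaccess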

Take time and extra care as you work through this process.

At the end of this manual process, I create a new archive named wordpress_migration.tar.gz containing all of the WordPress files. Consolidating all of the necessary files makes the upload in Step 11 easier/faster. The archive is structured so that all of the WordPress files are under a parent directory named migration/public_html. I store the revised MySQL dump file from the existing site, named wrdprss.sql, in the migration directory (not in migration/public_html) of the archive.

wordpress_migration.tar.gz

migration/
 ├── wrdprss.sql  <-- MySQL dump file
 └── public_html/
     ├── index.php
     ├── license.txt
     ├── readme.html
     ├── wp-activate.php
     ├── wp-admin/
     ├── wp-blog-header.php
     ├── wp-comments-post.php
     ├── wp-config.php
     ├── wp-content/
     ├── wp-cron.php
     ├── wp-includes/    
     ├── wp-links-opml.php
     ├── wp-load.php
     ├── wp-login.php
     ├── wp-mail.php
     ├── wp-settings.php
     ├── wp-signup.php
     ├── wp-trackback.php
     └── xmlrpc.php
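
For reference, a directory laid out like the above can be packaged from its parent folder with a command along these lines (run it from the directory that contains migration/):

tar -czvf wordpress_migration.tar.gz migration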

Step 3 – Testing the Migration Files

For some of the available blueprints, Amazon Lightsail uses stack installers/images developed by Bitnami. I download the Bitnami LAMP Virtual Machine (VM) and run it locally using VirtualBox. This allows me to test my migration steps as well as my migration archive/files before incurring costs on AWS. Follow Step 7 through Step 15 as a local test prior to performing the migration on the production Lightsail instance.

Step 4 – Create an Amazon Lightsail Instance

Login to the AWS Management Console and search for Lightsail. Follow the on-screen instructions to create a new Lightsail instance. I instantiate a Lightsail instance using the LAMP blueprint (Linux / Apache / MySQL / PHP).

Step 5 – Create and Attach a Static IP

Follow these instructions to create a static IP and attach it to the Lightsail instance.

Step 6 – Use SSH to Connect to the Instance

Once the Lightsail instance is running, I need to connect to it in order to start performing the migration. A connection to the instance can be made directly through the browser using the “Connect to your instance” option (follow these instructions). The other option is to use PuTTY and a private key. I prefer to use PuTTY since it feels more stable than the browser-based connection. Follow these instructions to make a PuTTY-based connection to your instance.

Step 7 – Update and Upgrade the Instance

At the command prompt, I execute the following command to update and upgrade the instance packages/software. This is a maintenance step that should be performed regularly as this will apply any available patches. With Lightsail, you are responsible for the ongoing maintenance activities associated with your server.

sudo apt update && sudo apt upgrade

Step 8 – Create a New DocumentRoot Directory

The DocumentRoot directory is where the WordPress application files will eventually be stored and served. While the Bitnami installation of Apache defaults to /opt/bitnami/apache/htdocs, I prefer to create a separate directory structure outside of the server path. You may use the default path or change this folder structure to match your own requirements.

The mkdir -p option creates each parent/child directory in the specified path instead of executing individual mkdir commands for each sub-directory. Replace example-com as appropriate.

sudo mkdir -p /www/example-com/public_html

Step 9 – Create Apache vhost Files

vhost files are used to configure Apache to serve files from the new DocumentRoot. For this step, I’ve modified the instructions from the Bitnami “Create a custom PHP application” guide. Depending on which Bitnami image was used to create the Lightsail instance, the following command returns either Approach A or Approach B.

test ! -f "/opt/bitnami/common/bin/openssl" && echo "Approach A: Using system packages." || echo "Approach B: Self-contained installation."

In my example, Approach A is returned so the following command sequence is appropriate for that path. These commands create copies of the existing vhost sample files for both http and https. vim is then used to edit both files to reflect the new DocumentRoot from Step 8. Replace/rename example-com as appropriate.

cd /opt/bitnami/apache2/conf/vhosts
cp sample-vhost.conf.disabled example-com-vhost.conf.disabled
vim example-com-vhost.conf.disabled
mv example-com-vhost.conf.disabled example-com-vhost.conf

Modify the DocumentRoot and Directory sections to /www/example-com/public_html following the sample file format.
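
As an illustration only (this is not the complete Bitnami sample file; keep the remaining directives from the sample intact), the two relevant sections end up looking roughly like this:

DocumentRoot /www/example-com/public_html
<Directory "/www/example-com/public_html">
  Require all granted
</Directory>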

If you’re not familiar with editing in vim, please spend a few minutes learning the basic keyboard commands. The file opens in command mode. To make changes to the file, press i to start editing. After completing the revisions, press escape to return to command mode. Once satisfied with the changes, enter :w to write (save) the file. To exit vim, enter :q in command mode.

Make the same changes to the https version of the .conf file.

cp sample-https-vhost.conf.disabled example-com-https-vhost.conf.disabled
vim example-com-https-vhost.conf.disabled
mv example-com-https-vhost.conf.disabled example-com-https-vhost.conf

Restart Apache.

sudo /opt/bitnami/ctlscript.sh restart apache

Step 10 – Enable and Start SSH (Optional)

This step may not be necessary. On the actual Lightsail instance, SSH is already enabled and running so these next commands are not needed. However, on my local VirtualBox instance of the Bitnami LAMP VM, SSH is not running.

To verify SSH is enabled and running, execute the following command. sshd is listed in the output if it is running.

ps aux | grep sshd

If it is enabled and running, execute the following command to find the port number SSH is listening on.

netstat -plant | grep :22

If SSH is not running, execute the following commands to enable and start SSH. These commands follow the Bitnami “Enable or Disable the SSH Server” guide.

sudo rm -f /etc/ssh/sshd_not_to_be_run
sudo systemctl enable ssh
sudo systemctl start ssh

Step 11 – Use PuTTY SFTP to Upload Migration Archive

I use PuTTY PSFTP to transfer the wordpress_migration.tar.gz file (created in Step 2) from my local machine to the Lightsail instance. Replace <static ip> with the actual static IP address and replace the path and filename as appropriate. This sequence of commands transfers the file to the Lightsail user account’s home directory on the instance.

open <static ip>
put c:\local\path\to\wordpress_migration.tar.gz
exit

Step 12 – Stop and Disable SSH (Optional)

Like Step 10, this step may not be necessary. If SSH was not running in Step 10 and you want to revert to that state, e.g. stop/disable SSH, then execute the following commands. Otherwise skip this step.

sudo systemctl stop ssh
sudo systemctl disable ssh

Step 13 – Unpack the WordPress Files

In this step, the archive file containing all of the WordPress files from the existing site is unpacked and moved into the DocumentRoot defined in the vhost files. Recall from Step 2 that the parent directory migration/public_html is used in the archive structure. Since public_html is also used in the DocumentRoot directory structure, this makes the file movement easier. Adjust the following sequence of commands to match the directory structure defined in the migration file and DocumentRoot.

These commands unpack the migration archive into the home directory. In my example, this creates a folder structure of ~/migration/public_html containing all of the WordPress files from my existing site. Once the files are unpacked, the contents of ~/migration/public_html are moved to /www/example-com/public_html. Then the appropriate permissions are set to give Apache access to the directories and files, and wp-config.php is secured. Apache runs as the daemon user in this example, but this may vary from instance to instance.

cd ~
tar -xzvf wordpress_migration.tar.gz
cd migration
sudo mv ./public_html /www/example-com
sudo chown -R daemon:daemon /www/example-com/public_html
sudo find /www/example-com/public_html -type d -exec chmod 755 {} \;
sudo find /www/example-com/public_html -type f -exec chmod 644 {} \;
sudo chmod 600 /www/example-com/public_html/wp-config.php
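
To double-check which user the Apache processes actually run as (so the chown above targets the right account), something along these lines works:

ps aux | grep -E '[h]ttpd|[a]pache' | awk '{print $1}' | sort -u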

Step 14 – Create a New Database and Import Existing SQL Data

If the suggested directory structure from Step 2 was used, then the MySQL export file named wrdprss.sql from the existing site will be located in the migration directory when the archive is unpacked. The following sequence of commands creates a new database and imports the existing data. This sequence also assumes WordPress is not using root to access the database. Please make sure to use the user/password from the existing site to re-create the MySQL user/service account to access the database.

If a brand new user/password is created in this step, then wp-config.php must be updated to reflect the new user/service account. After the site is up-and-running, the service account password should be changed in case it was compromised in transit during the migration.

The output from the command “cat ~/bitnami_credentials” contains the database root password. Use that password when prompted after executing “mysql -u root -p”. Again, adjust the sequence to reflect the database name, user, password, and MySQL dump filename and path.

cat ~/bitnami_credentials
mysql -u root -p
CREATE DATABASE db_example_com_wp_prd;
CREATE USER 'u_example_com_wp_prd'@'localhost' IDENTIFIED BY '<password>';
GRANT ALL PRIVILEGES ON db_example_com_wp_prd.* TO 'u_example_com_wp_prd'@'localhost';
FLUSH PRIVILEGES;
USE db_example_com_wp_prd;
SOURCE /home/bitnami/migration/wrdprss.sql
EXIT 
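
A quick way to confirm the import worked and that the new service account has access is to list the tables with that account (you will be prompted for the password set above):

mysql -u u_example_com_wp_prd -p db_example_com_wp_prd -e "SHOW TABLES;"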

Step 15 – Access the Site Using Static IP Address

Navigate to the site using the AWS static IP address to verify the site is accessible and the migration was successful. The browser may display a warning if the site is redirected to https since new certificates have not yet been issued for this domain/IP combination.

Step 16 – Update DNS A Resource Record to AWS Static IP

Update the A resource record for the domain to use the AWS static IP. The DNS update needs to propagate prior to issuing new certificates in Step 17. Instructions to update the resource record are specific to your DNS provider.

Step 17 – Install SSL Certificates

If the site allows/forces https, then use the bncert-tool to create certificates through Let’s Encrypt. Execute the command and follow the on-screen instructions.

sudo /opt/bitnami/bncert-tool

Step 18 – Access the Site Using Domain Name

Navigate to the site using the domain name (instead of the AWS static IP address) to verify the site is up-and-running as expected.

Step 19 – Cleanup

If everything is working as expected, the final step is to remove the migration files from the home directory. This sequence deletes the migration archive as well as the unpacked migration directory.

cd ~
rm -f wordpress_migration.tar.gz
rm -rf migration

Further Reading

At this point, I hope your site has been migrated successfully to Amazon Lightsail. While the above steps will get a site up-and-running on Amazon Lightsail, they do not cover other aspects of managing your own server such as ongoing maintenance and security.

For further reading, please consider:

Hardening WordPress Security Using .htaccess

Automate Amazon Lightsail Maintenance Activities Using Bash Script

Bash Script to Automatically Backup Amazon Lightsail WordPress Server to Google Drive