In my last post I covered how to get a proper dump of the database.

I have, of course, verified the backup of the database. For ghost you can easily do so by spinning up another instance running ghost - this will require a DNS entry or update, depending on whether it's a replacement or a separate verification instance. Services such as Digital Ocean make it easy to spin up an instance for a couple of hours and then destroy it. Update the new instance to the same version of ghost, go through the initial setup, and stop ghost using the ghost stop command. You can then import the database directly with the mysql command, specifying the user, -p to prompt for the password, and the sqldump file as input via "<". Getting the dump file onto the new instance is a matter of uploading it with rsync over ssh, or temporarily parking it on Dropbox or another service that will provide, or let you finagle, a direct download link workable with wget. Start ghost up again, log in using your old credentials, and your database is there. Image links will of course be broken unless you took the time to copy those across as well, but your tags, structure, users, settings, etc. will be there.
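To make that concrete, here's a sketch of the upload and import - the user, host, database name, and dump file name below are all placeholders, so substitute your own:

rsync -avz -e "ssh -i /path/to/my/key/mykey.pem" ghost_db_bak.sql ghostadminuser@verify.myblog.com:~/

# On the verification instance, run from the ghost install directory (e.g. /var/www/ghost):
ghost stop
mysql -u ghostdbuser -p ghost_prod < ~/ghost_db_bak.sql
ghost start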

The good news for users of ghost, as well as Wordpress, is that almost all of the dynamic data - posts, settings, and so on - lives in the database, and backing up the rest of the critical information, such as themes, theme changes, and images/files, is a matter of backing up a handful of directories.
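For a default ghost install, the directories in question look something like the following - verify against your own install, as the layout has shifted a bit between ghost versions:

/var/www/ghost/content/themes   # themes, including any changes you've made
/var/www/ghost/content/images   # uploaded images
/var/www/ghost/content/files    # other uploaded files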

As an aside - for ghost, you will want to keep track of which version you're running, as this affects the database structure. The version won't change unless you update manually, so note it down somewhere appropriate every time you do. It's also possible, on a reinstall, to bring ghost up to a specific version within a major version.
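If memory serves, the ghost CLI will report what you're running and let you pin a version on install or update - check ghost help for the exact forms your CLI version supports:

ghost version          # prints the installed ghost (and CLI) version
ghost install 4.48.2   # hypothetical example: install a specific version on the new box
ghost update 4.48.9    # hypothetical example: update within the same major version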

First though, we need to make sure that the script we put together fires off on a regular basis. On the instance/VPS running your blog, you'll need a crontab entry, either under the global crontab (sudo crontab -e on Ubuntu) or under another account that has permission to access the needed directories and the backup script.


29 13 * * * /home/ghostadminuser/ghost_bak_script.sh >> /home/ghostadminuser/ghost_db_backups/backup-ghost.log

The above fires off at 1:29 PM local time on the VPS.

So now we have a regularly scheduled dump of the database that deletes files older than a set number of days. Next step - pulling it all down, or across. The example I'll use is linux based, so if you have a linux box at home, a linux VPS elsewhere, or a linux-based home NAS such as a Synology or QNAP, the below will work with minor modifications to the directory paths and the path to bash.

I created three scripts, similar to the following. Yes, I could have created one script and run it with different inputs, or consolidated all three jobs into one, but I wanted one script per job.


#!/bin/bash
# The above is the usual path to bash - adjust it if your distro or NAS puts bash elsewhere

# Change user name and server as needed - e.g. "ghostadminuser"
USER="NAME_OF_USER_WITH_SSH_ACCESS_ON_THE_REMOTE_SERVER" 
SERVER="addressofblog.myblog.com"
PORT="22"

# This is the path to a copy of the key you otherwise use for interactive ssh access...
SSHID="/path/to/my/key/mykey.pem"

# Edit the FULL path for the images, theme, or DB backups on the remote server.
# From my previous example the DB backups would be at:
# "/home/ghostadminuser/ghost_db_backups/"
SOURCE="/var/www/ghost/content/themes/mytheme/"


# Local folder that the source will be replicated to - edit the path based on the
# source ("images", etc.); it should be different for each source folder.
# Note that for the theme I added a subfolder matching the name of the theme;
# for images I just used
# TARGET="/path/to/blog/bak/images"
# and similar for the DB backups.
TARGET="/path/to/blog/bak/themes/mytheme"

# Rename the LOG file based on the backup source - "images", etc.
LOG="/path/to/blog/bak/themes_backup.log"


# My log export of the event, which I dress up just a little to space out backup entries for human readability 
# vice machine parsing.
echo '' >> "$LOG"
echo '-------------' >> "$LOG"
echo "$(date)" >> "$LOG"
echo '-------------' >> "$LOG"

/usr/bin/rsync -avz --delete --progress -e "ssh -p $PORT -i $SSHID" "$USER@$SERVER:$SOURCE" "$TARGET" >> "$LOG" 2>&1

echo '' >> "$LOG"
echo '--------------------------' >> "$LOG"
echo 'Backup complete' >> "$LOG"
echo '--------------------------' >> "$LOG"
echo '' >> "$LOG"
echo '' >> "$LOG"
echo '' >> "$LOG"

One note here - I access my VPS instances via an ssh key for authentication, as is the normal practice these days, and this script assumes that. Also - you may have to run the script at least once manually, under the same user context it will be scheduled under via cron (or whatever scheduling utility your NAS uses), in order to accept the host key fingerprint.
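If you'd rather skip that interactive first run, ssh-keyscan can pre-seed known_hosts - the host and port here are the placeholders from the script above:

ssh-keyscan -p 22 addressofblog.myblog.com >> ~/.ssh/known_hosts

Just be aware that ssh-keyscan trusts whatever answers at that address, so compare the fingerprint against the server's actual host key if you go this route.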

So I manually ran the scripts, verified the files copied down, accepted the host key fingerprints, etc., scheduled them, and re-verified the logs to ensure they ran. As it happens, I had to fix the permissions on my .ssh/config file so that it was only accessible to my user before the scripts would fire off.
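For reference, the usual tightening looks like this - the key path is the placeholder from the script:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/config /path/to/my/key/mykey.pem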