MySQL Backups

Your data are valuable. If, God forbid, some disaster befalls your database, you should have a plan in place for recovering your data. In this post I describe a simple strategy for backing up a MySQL database. This might not be the best approach, but it has worked for me.

Create a Backup User

It’s useful to have a dedicated backup database user which has unrestricted read access to all of the data. Create the user and then grant it the privileges that mysqldump requires (MySQL 8.0 removed the GRANT ... IDENTIFIED BY shortcut, so the user must be created first):

mysql> CREATE USER 'backup'@'%' IDENTIFIED BY '6TwikEchNid{';
mysql> GRANT LOCK TABLES, SHOW VIEW, SELECT ON *.* TO 'backup'@'%';
mysql> FLUSH PRIVILEGES;
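You can check the result by logging in as the new user and listing its privileges:

$ mysql -u backup -p -e "SHOW GRANTS;"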

Set up a .my.cnf file in the home directory of the account that will run the backups to store the credentials. This allows automated backups to run without you having to enter credentials manually.

[client]
user=backup
password=6TwikEchNid{

Storing a password in a plain text file is not particularly secure, especially if you have sensitive data, but at least the database user only has read access. If the user account is secure then the chances of .my.cnf being compromised are slim.
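Since the file contains a password, make it readable only by its owner:

$ chmod 600 ~/.my.cnf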

Run Backup

At this point we can make a test dump of a database. Suppose that the database is called “product”.

$ mysqldump --single-transaction product

That should dump a profusion of SQL to the terminal. Use Ctrl-C to stem the onslaught.
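If you’d rather just glance at the start of the dump, pipe it through head:

$ mysqldump --single-transaction product | head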

Backup Script

Create a script to perform the backup. The script below is just a guideline, but it does the following:

  • create a timestamped backup;
  • compress the backup;
  • copy to S3; and
  • store locally (removing older local backups).

#!/bin/bash

# Name of the database.
#
DB=product

# S3 bucket used to store backups.
#
S3BUCKET=s3://product-backups

# Where will backups be stored locally?
#
BACKUPDIR="$HOME/backup"

TIMESTAMP=$(date +"%Y-%m-%d-%H%M")

mkdir -p "$BACKUPDIR/$TIMESTAMP"

# Dump contents of database (--single-transaction takes a consistent
# snapshot of InnoDB tables without locking them).
#
mysqldump --single-transaction "$DB" >"$BACKUPDIR/$TIMESTAMP/$DB.sql"

# Compress.
#
bzip2 "$BACKUPDIR/$TIMESTAMP/$DB.sql"

# Copy to S3.
#
s3cmd put --recursive "$BACKUPDIR/$TIMESTAMP" "$S3BUCKET"

# Delete old (local) backups and prune the empty timestamp
# directories left behind.
#
find "$BACKUPDIR" -type f -mtime +15 -exec rm {} \;
find "$BACKUPDIR" -mindepth 1 -type d -empty -delete

Save this script as ~/bin/backup-database.sh.
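Make the script executable so that cron can run it:

$ chmod u+x ~/bin/backup-database.sh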

Test the script manually and validate that (1) it’s creating a local backup and (2) the backup is copied to S3.
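To confirm that the copy to S3 worked you can list the bucket (using the bucket name from the script above):

$ s3cmd ls s3://product-backups/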

Add this to your crontab (edit it with crontab -e) so that the backup runs daily at 08:00.

0 8 * * * $HOME/bin/backup-database.sh
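By default cron will try to email the script’s output to you. Redirecting it to a log file (the path here is just a suggestion) can make failures easier to diagnose:

0 8 * * * $HOME/bin/backup-database.sh >>$HOME/backup.log 2>&1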

Restore

If the worst happens then you’ll need to restore from a backup.

First create a new database.

$ mysqladmin -u root -p create product
# or
$ mysql -u root -p -e "CREATE DATABASE product;"

Next restore a backup (probably the most recent one, but possibly an older one).

$ bzip2 -dc backup/2019-07-19-0800/product.sql.bz2 | mysql -u root -p product

I’d suggest performing a “dry run” of these instructions (using a distinct database name) just to ensure that you get the anticipated results.
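A dry run might look something like this (the name product_test is just an illustration):

$ mysql -u root -p -e "CREATE DATABASE product_test;"
$ bzip2 -dc backup/2019-07-19-0800/product.sql.bz2 | mysql -u root -p product_test
$ mysql -u root -p -e "SHOW TABLES;" product_test
$ mysql -u root -p -e "DROP DATABASE product_test;"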