WordPress on Amazon EC2 – Part 11 – Care and feeding (backups mainly)
So, now that you have your WordPress blog going, something you have to deal with is backups. Like I was writing a while back, the cloud is not invulnerable. All of this "cloud" stuff is still built on servers just like anything else. Amazon's servers are built very well, but they are still physical things that can break.

Let's break this down into two parts:
- Server hardware
- Storage
Each of these failures can be dealt with in different ways.
Server Failure
This is the easier one to deal with. With the micro server you don't have any "local" storage; everything is stored in EBS. EBS is like a hard drive in the cloud. This is more reliable than a normal hard drive -- it already has some redundancy built in. In the typical failure mode all that happens is the server goes away and the storage is still around. All you have to do is start up a new server with the existing EBS volume and you're back in business.

All that being said, EBS can still fail. There was a famous incident where an Amazon data center went down in a storm. We still need a way of dealing with anything that might hit the fan.
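As an extra layer of protection, you can also snapshot the EBS volume itself. Here's a minimal sketch using the AWS CLI -- this assumes you have the CLI installed and configured, and the volume ID below is a placeholder you'd replace with your own:

# Find the volume ID attached to your instance
$ aws ec2 describe-volumes
# Snapshot it (vol-12345678 is a placeholder; snapshots are stored redundantly by Amazon)
$ aws ec2 create-snapshot --volume-id vol-12345678 --description "blog volume backup"

Snapshots guard against a lost volume, but a real off-server backup (below) also protects you from your own mistakes, like an errant rm.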
True Backup
So, let's store our blog and everything in a good reliable spot, which also happens to be in Amazon. S3 (Simple Storage Service) is the service we'll use for this. If you store something in S3 you basically know it'll be safe: it's stored on at least three sets of hard drives in three different data centers. It might not survive an all-out nuclear war, but more or less anything up to that point.

First off, we need to get our security credentials from the AWS console. At the end of it you get an access key and a secret key. The keys you get here you can only see once, so copy them somewhere safe. They are really long random collections of letters and symbols.

We'll create an S3 bucket to store our backups. Go to the S3 part of the AWS console and create a new bucket:
We'll use the AWS console interface. All you have to do is choose a unique name; in my case I'm using something like "vec-backup".

Next, we need a way of getting files to S3. Enter: s3cmd.
$ sudo wget -P /etc/yum.repos.d/ http://s3tools.org/repo/RHEL_6/s3tools.repo
$ sudo yum install -y s3cmd
$ s3cmd --configure
The first wget sets up the repository from which we'll get the new command. Next we use our friend "yum" to install s3cmd. Finally we configure it; it'll ask you for the keys you just got.
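To make sure the configuration took, you can list your buckets. And if you'd rather skip the console, s3cmd can create the bucket too (the name here is just my example):

$ s3cmd ls
$ s3cmd mb s3://vec-backup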
At this point we're ready to back things up. First (and you only need to do this once):

$ cd ~
$ mkdir backup
$ cd backup
This just makes a new directory for backups. "~" is your home directory.

Something to note -- I'm using today's date as an example of a backup filename. You can use whatever you want.
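If you'd rather not type the date by hand, the date command can generate the stamp for you (a small sketch; the TODAY variable name is just my choice):

$ TODAY=$(date +%Y%m%d)
$ echo $TODAY   # e.g. 20140401

You could then write db-backup-$TODAY.sql.gz instead of spelling the date out in the commands below.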
$ mysqldump --all-databases -uroot -p | gzip > db-backup-20140401.sql.gz
$ sudo tar cf - /var/www/ /etc/httpd/conf/ /etc/httpd/conf.d/ /etc/php.d/ /etc/php-fpm.d | gzip > site-20140401.tar.gz
$ s3cmd put *20140401* s3://your-bucket-name/20140401/
$ rm *20140401*
- mysqldump - you'll be asked for the root password, by the way - this copies the entire database onto its standard output. That gets "piped" into gzip to compress the output and save it into a file.
- tar - we've seen this before: tape archive. Last time we extracted stuff; this time we're combining. As before, we compress the output and save it into a file.
- Lastly, we upload both of those files to the new S3 bucket you've created
- rm will remove the backups that have now been transferred to S3
This should get everything uploaded into S3. Now you're all backed up!
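If you don't feel like typing all of that every time, here's a minimal sketch of a script that strings the steps together. The bucket name and the idea of keeping the MySQL password in ~/.my.cnf (so mysqldump doesn't stop to ask for it) are my assumptions -- adjust to taste:

#!/bin/bash
# backup.sh -- a sketch of a daily backup run, not battle-tested.
# Assumes s3cmd is configured and ~/.my.cnf holds the MySQL root password.
set -e
TODAY=$(date +%Y%m%d)
BUCKET=s3://your-bucket-name   # placeholder -- use your own bucket
cd ~/backup
mysqldump --all-databases | gzip > db-backup-$TODAY.sql.gz
sudo tar cf - /var/www/ /etc/httpd/conf/ /etc/httpd/conf.d/ /etc/php.d/ /etc/php-fpm.d | gzip > site-$TODAY.tar.gz
s3cmd put *$TODAY* $BUCKET/$TODAY/
rm *$TODAY*

Run it by hand at first, and only put it into cron once you trust it (cron will also need the sudo to work without a password prompt).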
Restore
Whenever you back something up, you need to be able to restore it too.

What you'll want to do is run through the setup up to this point... then do
$ s3cmd get --recursive s3://your-bucket-name/20140401/
This gets everything downloaded from S3.

Next, let's restore the database:
$ gunzip < db-backup-20140401.sql.gz | mysql -u root -p
We'll uncompress the database and pipe it into mysql. The backup is basically a script to recreate the database, and we'll just run the script.

Last, we'll un-tar the file backup in place. We copied all of the configuration files we messed with into the archive, so we can extract everything right back to where it came from (tar stored the paths relative to /, so we point it back at / with -C). Then we restart Apache.
$ sudo tar -C / -xzvf site-20140401.tar.gz
$ sudo service httpd restart
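A quick sanity check after the restore doesn't hurt -- this is just my habit, not a required step:

$ mysql -u root -p -e 'SHOW DATABASES;'
$ curl -I http://localhost/

The first confirms the databases came back; the second should get a response from Apache.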
Next time: transferring from another hosting provider!