Tuesday, September 08, 2009

How to create and save an AMI image from a running instance

One snag I encountered early on in my migration of Cragwag and Sybilline to Amazon's EC2 Cloud was that I needed to take a snapshot of my running instance and save it as a new Amazon Machine Image (AMI).

I'd created a bare-bones Debian image from a public AMI (32-bit Lenny, 5.0, not much else) and then installed a few standard software packages on it - mysql, ruby, apache, etc etc etc. By the time I'd got them all configured the way I wanted, a couple of hours had gone by (I'll go into the configuration relating to EBS in a separate post), so I wanted to snapshot this instance as a new AMI image. That way, if and when I needed to create a new instance, all of this work would already have been done.

It actually took a fair amount of time to find out (well, more than a few seconds Googling, which is just eternity these days, y'know?) so I'll save you the pain and just give you the solution.

First, install Amazon's AMI tools, and API tools:


export EC2_TOOLS_DIR=~/.ec2 #(or choose a directory here)
mkdir -p $EC2_TOOLS_DIR
cd $EC2_TOOLS_DIR
mkdir ec2-ami-tools
cd ec2-ami-tools
wget http://s3.amazonaws.com/ec2-downloads/ec2-ami-tools.zip
unzip ec2-ami-tools.zip
ln -s ec2-ami-tools-* current
cd ..
mkdir ec2-api-tools
cd ec2-api-tools
wget http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip
unzip ec2-api-tools.zip
ln -s ec2-api-tools-* current
cd ..

# the AMI tools look for EC2_AMITOOL_HOME; the (Java-based) API tools look for EC2_HOME
echo "export EC2_AMITOOL_HOME=$EC2_TOOLS_DIR/ec2-ami-tools/current" >> ~/.bashrc
echo "export EC2_HOME=$EC2_TOOLS_DIR/ec2-api-tools/current" >> ~/.bashrc
echo "export PATH=\$PATH:$EC2_TOOLS_DIR/ec2-ami-tools/current/bin:$EC2_TOOLS_DIR/ec2-api-tools/current/bin" >> ~/.bashrc
source ~/.bashrc
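
A quick sanity check - these should both print version numbers if everything's wired up (the API tools are Java-based, so you'll also need Java installed and JAVA_HOME set):

ec2-ami-tools-version
ec2-version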


Next, you'll need to get your security credentials. You can view these - or create them as needed - on the AWS "Your Account" > "Security Credentials" page.

I recommend saving your X.509 certificate and your private key somewhere under /mnt/ - this directory is excluded from the bundled image. That's quite important, as otherwise your credentials would be bundled up in the image - and if you ever shared that image with anyone else, you'd be sharing your credentials too!
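
For example - and this is just my layout, any directory under /mnt/ will do - assuming you've downloaded the two .pem files to your home directory:

mkdir -p /mnt/keys
chmod 700 /mnt/keys
mv ~/pk-*.pem ~/cert-*.pem /mnt/keys/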

You'll also need to note your AWS access details - especially your access key and secret key - plus your Amazon account ID.

Now, we're at the main event.

To take a snapshot of your running instance:

First, choose a name for your AMI snapshot. We'll call it ami-instance-name :)


# make a directory for your image:
mkdir /mnt/ami-instance-name

# create the image (run this on the instance itself, as root - it will take a while!)
ec2-bundle-vol -d /mnt/ami-instance-name -k /path/to/your/pk-(long string).pem -c /path/to/your/cert-(long string).pem -u YOUR_AMAZON_ACCOUNT_ID_WITHOUT_DASHES


Once that's done, you should have a file called image.manifest.xml in your /mnt/ami-instance-name directory, along with all the bundle parts. Sometimes it will say "Unable to read instance meta-data for product-codes" - but this doesn't seem to cause any problems, and I've successfully ignored it so far :)
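To double-check, a directory listing should look something like this (the exact number of parts depends on the size of your image):

ls /mnt/ami-instance-name
# image.manifest.xml  image.part.00  image.part.01  ... and so on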

Next, upload the AMI image to S3. This command will create an S3 bucket of the given name if it doesn't exist - I've found it convenient to call my buckets the same as the instance name:

ec2-upload-bundle -b ami-instance-name -m /mnt/ami-instance-name/image.manifest.xml -a YOUR_AWS_ACCESS_KEY -s YOUR_AWS_SECRET_KEY


You should then be able to register the image. I've done that using the rather spiffy AWS Management Console web UI, but you can also do it from the command line using:

ec2-register ami-instance-name/image.manifest.xml
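
If it works, you'll get back the ID of your new AMI - something along these lines (the ID here is made up):

IMAGE   ami-1a2b3c4d

That ami-... ID is what you'll pass to ec2-run-instances when you want to launch a new copy.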


And that's it!

Of course, you could be cunning and create a script that does it all in one. I've got my AWS/EC2 credentials stored in environment variables from my .bashrc:

export EC2_PRIVATE_KEY=/mnt/keys/pk-(long string).pem
export EC2_CERT=/mnt/keys/cert-(long string).pem
export AWS_ACCOUNT_ID=(my account id)
export AWS_ACCESS_KEY=(my AWS access key)
export AWS_SECRET_KEY=(my AWS secret key)


which means I can make, upload and register an image in one go, by running this script:

#!/bin/bash

AMI_NAME=$1

# bundle, upload and register in one go - assumes the credential env vars above are set
mkdir -p /mnt/images/$AMI_NAME
ec2-bundle-vol -d /mnt/images/$AMI_NAME -k $EC2_PRIVATE_KEY -c $EC2_CERT -u $AWS_ACCOUNT_ID
ec2-upload-bundle -b $AMI_NAME -m /mnt/images/$AMI_NAME/image.manifest.xml -a $AWS_ACCESS_KEY -s $AWS_SECRET_KEY
ec2-register $AMI_NAME/image.manifest.xml


...and giving it a parameter of ami-instance-name. I have that script saved as make_ami.sh, so I can just call, for instance:

make_ami.sh webserver-with-sites-up-and-running


...and go have a cup of coffee while it does its thing.

Moving a site to The Cloud

Last week I did a lot of reading and research into cloud hosting. The "Cloud" has been a buzzword for a while now, often bandied about by those who know no better as a simple sprinkle-on solution for all of your scale problems - much in the same way as AJAX was touted around a few years ago as a magic solution to all of your interface problems.

The perception can sometimes seem to be "Hey, if we just shift X to The Cloud, we can scale it infinitely!" The reality, of course, is something rather more qualified. Yes, in theory, the cloud has all the capacity you're likely to need, unless you're going to be bigger than, say, Amazon (are you? Are you reeeeeeeally? C'mon, be honest...) - provided - and it's a big proviso - that you architect it correctly. You can't just take an existing application, dump it into the cloud, and expect to never have a transaction deadlock again, for instance. That's an application usage pattern issue that needs to be dealt with in your application, and no amount of hardware, physical or virtual, will solve it.

There are also some constraints that you'll need to work around, that may seem a little confusing at first. But once I got it, the light went on, and I became increasingly of the opinion that the cloud architecture is just sheer bloody genius.

What kind of constraints are they? Well, let's focus on Amazon's EC2, as it's the best known...

  • Your cloud-hosted servers are instances of an image
    They're not physical machines - you can think of them as copies of a template Virtual Machine, if you like. Like a VMWare image. OK, that one's fairly straightforward. Got it? Good. Next:

  • Instances are transient - they do not live for ever
    Following on from the first point: you create and destroy instances as you need them. The flipside is that there is no guarantee that the instance you created yesterday will still be there today. It should be, but it might not be. EC2 instances do die, and when they do, they can't be brought back - you need to create a new one. This is by design. Honestly!

  • Anything you write to an instance's disk after creation is non-persistent
    Now we're getting down to it. This means that if you create an instance of, say, a bare-bones Linux install, then install some more software onto it and set up a website, and the instance then dies - everything you've written to that instance's disk is GONE. There are good strategies for dealing with this, which we'll come onto next, but this is also by design. Yes, it is...

  • You can attach EBS persistent storage volumes to an instance - but only to one instance per volume
    This one is maybe the most obscure constraint but is quite significant. Take a common architecture of two load-balanced web servers with a separate database server. It's obvious that the database needs to be stored on a persistent EBS volume - but what if the site involves users uploading files? Where do they live? A common pattern would be to have a shared file storage area mounted onto both web servers - but if an EBS volume can only be attached to one instance, you can't do that.

Think about that for a few seconds - this has some pretty serious implications for the architecture of a cloud-hosted site. BUT - and here's the sheer bloody genius - these are the kind of things you'd have to deal with when scaling out a site on physical servers anyway. Physical hardware - especially disks - is not infallible and shouldn't be relied on. Servers can and do go down. Disks conk out. Scaling out horizontally needs up-front thought put into the architecture. The cloud constraints simply force you to accept that, and deal with it by designing your applications with horizontal scaling in mind from the start. And, coincidentally, Amazon provides some kick-ass tools to help you do that.

Take, for example, the last bullet point above - that EBS volumes can only be attached to one instance. So how do you have file storage shared between N load-balanced web servers? Well, the logical thing to do is to have a separate instance with a big persistent EBS volume attached to it, and have the web servers access it by some defined API - WebDAV, say, or something more application-specific.
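
As an aside, the EBS plumbing for that file-server instance looks something like this - a sketch with made-up IDs, zone and size, so substitute your own:

# create a 50GB volume in the same availability zone as your instance
ec2-create-volume --size 50 --availability-zone us-east-1a

# attach it to the file-server instance as a block device
ec2-attach-volume vol-12345678 -i i-87654321 -d /dev/sdf

# then, on the instance itself: format it (first time only) and mount it
mkfs.ext3 /dev/sdf
mkdir /vol
mount /dev/sdf /vol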

But hang on... isn't that what you should be doing anyway? Isn't that a more scalable model? So that when your fileserver load becomes large, you could, say, create more instances to service your file requests, and maybe load-balance those, and....

See? It forces you to do the right thing - or, at least, put in the thought up front as to how you'll handle it. And if you then decide to stubbornly go ahead and do the wrong thing, then that's up to you... :)

So, anyway, I wanted to get my head round it, and thought I'd start by shifting Cragwag and Sybilline onto Amazon's EC2 cloud hosting service. I did this over a two-day period - most of which, it has to be said, was spent setting up Linux the way I was used to, rather than on the cloud config - and I'll be blogging a few small, self-contained articles with handy tips I've learned along the way. Stay tuned....