Re-Cloudifying

With my blog rebuild out of the way, my next task was to replace my old 2010-era Amazon EC2 t1.micro instance with a new, less-expensive t2.nano instance. Without the blog, my EC2 needs are minimal, and the nano instances are really cheap.

My plan was to poke around the old micro instance, make a list of what was worth saving, spin up the new nano instance in the same AWS availability zone, copy things over, then terminate the old instance.

That was a solid plan.

Unfortunately, I couldn't resist the urge to play around with upgrading the micro instance to the newest Ubuntu release (just to see what the process was like). The result was that my micro instance would no longer boot. I tried attaching a volume created from an EBS snapshot to that instance, but that didn't work. I tried creating a new AMI and instance from the old snapshot, but that didn't work either. I'd completely lost all access to my instance.

(Note to self: Test your restore-from-backup plan once in a while, dumbass.)

So, fine, new plan: Create the new nano instance, attach the old volume to it, then copy whatever is needed from that volume.

I also wanted my new setup to meet these requirements:

  • Ensure the blog is no longer dependent on anything on the server
  • Require minimal configuration of the server (so this will be easier next time)
  • Ensure everything I want on my server is easily available from external sources (GitHub, etc.)
  • Ensure I can easily restore from an EBS snapshot if this ever happens again

My longer-term plan is to experiment with Docker containers and other mechanisms for deploying and managing stuff in the cloud, but for now I just want a new server to fill the roles of my old server.

Creating the New Instance

I referred to my original Drupal-on-EC2 post while setting up the new nano instance. I did basically the same thing, except that I started with Amazon's "Ubuntu Server 16.04 LTS (HVM)" AMI, and chose the "t2.nano" instance type.

I set up a new security group that allows SSH and HTTP/HTTPS from outside.

I wanted to assign my existing Elastic IP to the new nano instance, but the EC2 console wouldn't let me do that. After some head-scratching and Googling, I learned that Elastic IPs for the old EC2-Classic environment and the new VPC environment are allocated from separate pools, so I couldn't reuse the old one for the new instance. So I created a new Elastic IP, assigned it to my new instance, deallocated the old one, and updated my DNS records.
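The same dance can be done from the AWS CLI. This is just a sketch (I used the console), and the allocation ID, instance ID, and old address below are placeholders, not my actual values:

```shell
# Allocate a new Elastic IP from the VPC pool (prints an AllocationId)
aws ec2 allocate-address --domain vpc

# Associate it with the new instance (both IDs are placeholders)
aws ec2 associate-address --allocation-id eipalloc-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0

# Release the old EC2-Classic address once DNS has been updated
aws ec2 release-address --public-ip 203.0.113.10
```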

Mounting the Volume from the Old Instance

Now I wanted to get my old stuff. I created a new EBS volume from a snapshot of the old instance's boot volume, in the same availability zone as my new instance. Then I attached that volume to my new instance as /dev/sdf. To figure out how Ubuntu saw that new device, I ran this command:

lsblk

which indicated that the available devices were named xvda (mounted as /) and xvdf. I mounted the volume as /olddata using these commands:

sudo mkdir /olddata
sudo mount /dev/xvdf /olddata

then a quick ls /olddata/home/ubuntu and ls /olddata/usr/share/drupal6 verified that all my stuff from the old instance was indeed present. (Phew!)
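One check I could have done before mounting, to confirm the attached device actually contained a recognizable filesystem:

```shell
# Reports the filesystem type, e.g. "Linux rev 1.0 ext4 filesystem data"
sudo file -s /dev/xvdf
```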

I created a compressed archive of the volume's contents so I could download it to my laptop for easy browsing and permanent backup:

sudo tar -zcvf olddata.tar.gz /olddata

The archive was 2.7 GB, much of it stuff I'll never need, but it's good to be thorough.
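Getting the archive down to the laptop was a simple scp; the key path and hostname below are placeholders. For a 2.7 GB transfer it's worth recording a checksum on the server so the downloaded copy can be verified:

```shell
# On the server: record a checksum alongside the archive
sha256sum olddata.tar.gz > olddata.tar.gz.sha256

# On the laptop: download the archive and its checksum, then verify
scp -i ~/.ssh/placeholder-key.pem \
    'ubuntu@ec2-203-0-113-10.compute-1.amazonaws.com:olddata.tar.gz*' .
sha256sum -c olddata.tar.gz.sha256
```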

Migrating SSH Keys

I copied my SSH keys from the old instance so that I could use GitHub and other resources without generating and uploading new keys:

cp /olddata/home/ubuntu/.ssh/id_rsa* ~/.ssh
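If ssh later complains that the copied private key's permissions are too open (it insists on mode 600 for private keys), tightening the modes fixes it:

```shell
# ssh requires the private key to be readable only by its owner
chmod 600 ~/.ssh/id_rsa
chmod 644 ~/.ssh/id_rsa.pub
```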

Setting Up Git

Git was already included in the Ubuntu install, but I needed to configure the username and email that would be used for any local commits:

git config --global user.name "Kristopher Johnson"
git config --global user.email "kris@kristopherjohnson.net"

I also added this setting to squelch an annoying Git warning message:

git config --global push.default simple

What's Good on TCM?

My What's Good on TCM? page is static HTML that is regenerated by a cron job at 11:00 every morning. All the code is on GitHub.

My plan is to eventually move the generated page to a GitHub Project Page, so that it is not served by this server anymore, but for the sake of expediency I just set up the same cron job on the new server and configured Apache as needed to serve the generated page at that URL.
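The crontab entry looks roughly like this; the repository path and the make/copy commands here are illustrative guesses, not the literal contents of my crontab:

```shell
# m h dom mon dow  command
# Regenerate the page at 11:00 every morning and publish it (paths illustrative)
0 11 * * * cd /home/ubuntu/whats-good-on-tcm && make && cp tcm.html /var/www/html/
```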

I had to install Apache to serve the generated page, and Node, npm, and Make to be able to run the page generator:

sudo apt-get install apache2 nodejs npm make

My scripts expect to be able to run an executable named "node", but Ubuntu installs /usr/bin/nodejs, so I set up a symlink in /usr/local/bin to allow "node" to work:

sudo ln -s /usr/bin/nodejs /usr/local/bin/node

I had to change my script so that it would copy the generated tcm.html file to /var/www/html, rather than to /var/www (which was the correct location for Ubuntu 12.04).

To give the ubuntu account write access to the /var/www/html directory, I ran these commands (note that the new group membership doesn't take effect until the next login):

sudo usermod -a -G www-data ubuntu
sudo chgrp www-data /var/www/html
sudo chmod g+w /var/www/html

Enabling cgi-bin

I have some Perl CGI scripts that people depend on. Here is what I did to enable the /cgi-bin/ paths:

sudo apt-get install libcgi-pm-perl libapache2-mod-perl2
sudo a2enmod cgi
sudo chgrp www-data /usr/lib/cgi-bin
sudo chmod g+w /usr/lib/cgi-bin
# ... Move my scripts into /usr/lib/cgi-bin ...
sudo service apache2 restart
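Before moving the real scripts over, it's worth verifying that CGI execution actually works with a trivial test script (the name hello.sh is my own invention here; I'm using shell rather than Perl just to keep everything in one language):

```shell
# Create a minimal CGI script that emits a proper header and a test body
cat > hello.sh <<'EOF'
#!/bin/sh
printf 'Content-Type: text/plain\r\n\r\n'
echo "CGI is working"
EOF
chmod +x hello.sh

# Run it locally first to check the output
./hello.sh
```

Once it runs locally, move it into /usr/lib/cgi-bin and fetch /cgi-bin/hello.sh from a browser to confirm Apache executes it.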

Eliminating Blog Dependencies

My blog is now hosted at GitHub. However, some articles contain links to images and other files that were served by Drupal on the old server, so I needed to extract those from the old Drupal site directories and copy them to my blog repo. I was able to use Pelican's EXTRA_PATH_METADATA setting to give these static files the same URL paths that they had in Drupal, so I didn't have to update any links in the blog posts that refer to them.

All for Now

So, the result is that I have a reasonably up-to-date Ubuntu server, and my blog is no longer dependent on it. I can reproduce this configuration easily by following the instructions above.

I have an archive of everything that was on the old server, so at my leisure I can poke around and find anything worth saving.

(Note to self: Make a backup.)

© 2003-2017 Kristopher Johnson