Notes on Encryption

Recently I have been running out of hard disk space. I hummed and hawed a bit before deciding to upgrade - there's only so much porn you can delete, after all. Over the past few days I have been banging my head against a variety of issues - issues which are finally solved. Herein I leave a note for myself should I ever want to set this stuff up again, and maybe even help some people in the future who stumble over this in Google. As a result: expect lameness and boredom. Feel free to skip it.

The Old Setup and The Upgrade Plan

I had a PC with 2x 200GB IDE drives and a 500GB external drive for backup. I'd set it up so that this external drive was encrypted - it's pretty easy to swipe an external drive, after all, and I wouldn't want other people to get at my passwords, financial details, and naked pictures of the exes. The original plan was just to upsize this setup - get 2x 500GB drives and a 1TB external drive for backup. This wouldn't give me any overhead for incremental backups (where you keep, say, three days' worth of system images but only store the changes between them), but since I'm not planning to fill the drives up I could have kept incremental backups going for a few years yet.

Then, through some conversations with colleagues and people online, I reconsidered. It turns out it would cost me less to get 4x 500GB internal drives: more disk space for less money, plus I could build a RAID array for redundancy. Instead of backing up the whole system I could back up just the bits that matter - programming, music, photographs - onto the existing 500GB external drive. I can generate 8GB of files from a photoshoot, but even then the 500GB would give me some overhead. One cockup at this point: I thought my motherboard had 4 IDE channels when it actually only has two, so I had to go and get a PCI IDE controller to keep using my DVD drive. £20 from Maplins - not the cheapest, but the most convenient.

I had copies of all my data on the backup drive, and I was removing the 2x 200GB drives with the existing system on them. Normally when I upgrade hard disks I end up copying the old system onto the new one, resulting in the same system but with more space available to it. This time, though, I figured that trying to jibble a system image into something that would boot and set up RAID was about as much effort as doing a fresh install and copying /home across, so I'd do that instead. After about 6 years of using Debian and having to wrestle it whenever I wanted to do something new, I thought I'd give Ubuntu a shot. The software seems newer and the configuration more straightforward.

Oh, there was one more thing: I wanted the whole system to be encrypted.

The New Setup, and How to Achieve it

So I now had a PC with 4x 500GB hard disks, an Ubuntu 7.10 install CD, and a plan. We'd take a slice of 300MB from the beginning of each disk. On the first two disks this would be a RAID1 array (mirroring) for /boot - the only unencrypted partition. On the last two disks the 300MB slices would form a RAID0 array (striping) to be used for encrypted swap. The remaining large partitions on the 4 disks would be used for a RAID5 array (striping with distributed parity). On top of this sits LUKS/dm-crypt for full-disk encryption, and on top of that sits LVM for resizing purposes. Diagrammatically:

Raw devices → Software RAID → Encryption → LVM → filesystem.
(Instructions on installing using the alternative installer here)
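The alternative installer sets all of this up through its menus, but for reference, building the same stack by hand looks roughly like this. This is a sketch - the device names, array numbering, and sizes are from my setup and may well differ on yours:

# 300MB slices: RAID1 for /boot, RAID0 for swap
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdb1
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/hdc1 /dev/hdd1
# the big slices: RAID5 for everything else
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/hda2 /dev/hdb2 /dev/hdc2 /dev/hdd2
# LUKS on top of the RAID5, then LVM on top of the LUKS device
cryptsetup luksFormat /dev/md1
cryptsetup luksOpen /dev/md1 filesystem
pvcreate /dev/mapper/filesystem
vgcreate vg /dev/mapper/filesystem
lvcreate -n root -L 450G vg
mkfs.ext3 /dev/vg/root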

You'd think that this would be all you need, right? Well, no. The installer fucks up and this system won't boot. It'll appear to hang before announcing it can't find vg-root. You need to edit your GRUB command line (hit Esc during startup and follow the instructions) to add the following magic:

cryptopts=source=/dev/md1,target=filesystem

source is the device where your encrypted root filesystem lives; target is just a friendly name, as far as I know. Boot this and it'll prompt you for the passphrase you set up during the install, and you should find your new system boots. This wasn't documented anywhere, by the way - I had to delve into the initrd to find out what it was trying to read and what it was expecting.

Once it's booted up, run sudo vim /boot/grub/menu.lst and change the two lines that begin # kopt so they include the cryptopts option above. Save and quit, then run sudo update-grub to regenerate the kernel entries in the menu with the new options.
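For reference, an edited kopt line ends up looking something like this (the root device depends on what you called your volume group and logical volume - mine were vg and root):

# kopt=root=/dev/mapper/vg-root ro cryptopts=source=/dev/md1,target=filesystem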

You can now set up your encrypted swap. This one's a doddle: edit /etc/crypttab so it looks like this:

# <target name> <source device>         <key file>      <options>
cswap           /dev/md2                /dev/random     swap

And then add a line in /etc/fstab:

# <file system>     <mount point>   <type>  <options>       <dump>  <pass>
/dev/mapper/cswap   none            swap    sw              0       0

Reboot, and you'll find you've got encrypted swap. It uses a fresh random key on every boot, so the contents of your swap are never recoverable afterwards - anything that gets swapped out of memory is safe from later recovery.
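To satisfy yourself it's worked, check that the mapping exists and that the kernel is actually swapping to it - something like:

sudo cryptsetup status cswap    # should report a cipher, with /dev/md2 as the device underneath
cat /proc/swaps                 # should list /dev/mapper/cswap as active swap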

Is it Worth it?

The jury is still out. I've been restoring data from a caddied IDE drive, and get a sustained rate of about 14MB/sec. It does more in bursts - 45MB/sec, which is less dire, but still not great. How much of this is down to the encryption, how much to the software RAID, and how much to the IDE interface being shit, I'm not sure. If I were to do this over I'd get a SATA card and SATA drives; had I known I'd end up buying an IDE controller card anyway, I'd have done exactly that. Frankly, the easier cabling that SATA provides would have been worth it alone. I'm not that impressed so far - my plan is to try this out for a week and see if the system feels so sluggish I'm compelled to reformat and scrap the encryption. I might pick a halfway house of having just /home encrypted, which would probably be adequate.
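If you want to apportion the blame yourself, a crude way is to compare sequential read speeds at each layer of the stack - raw disk, RAID array, then through the encryption. A rough, non-destructive sketch, using the device names from my setup:

sudo hdparm -t /dev/hda                                          # raw disk
sudo hdparm -t /dev/md1                                          # RAID5 array
sudo dd if=/dev/mapper/vg-root of=/dev/null bs=1M count=1024     # through dm-crypt and LVM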

Why encryption?

The performance/security tradeoff depends on a few things: how powerful your system is, how dubious your activities are, and how paranoid you want to get. My greatest practical fear is theft; in the UK the police can compel you to hand over your encryption keys and/or your data anyway (it's a criminal offence not to, and the law has been used). I figure any higher organisation would be a bit more mercenary anyway - either rooting the machine while it's still powered on, or installing a hardware keylogger and forcing a brief power cut so you type the passphrase in again for them. And if you're really a high-profile target they'll just beat your passphrase out of you.

So why bother? Here are a few thoughts:

  1. If your computer is stolen then your data isn't in jeopardy. Yeah, you've lost the machine, but you don't have to change your passwords, credit cards, and so on.
  2. If the cops ever seize your PC you're protected slightly more. If your data is readable then someone's going to read it; if they have to serve notices and warrants to get access, there's going to be some consideration of whether it's actually worth jumping through the hoops rather than just going on a fishing expedition.
  3. This is pure conjecture, but if you ever got a nastygram from the RIAA I'd imagine you're on a much better footing. I don't think the civil courts can force you to decrypt your data. This might not matter, though - from what little I know about the filesharing lawsuits they're relying on logs from your ISP rather than images of your HD.
  4. It's one of the few things you can do these days to check the power of big government. If you're worried about moving into a panopticon-style society then you can claw back a little peace of mind.

Rejigging Things

So, let's say you've put all your HDDs on two IDE channels, because you didn't know that's a really bad idea. You open up the case, move the drives to one per IDE channel, and slave the DVD drive off one of them. The system will probably not boot after this, so you:

  1. Boot up from a boot CD.
  2. Use ls /dev/hd* and ls /dev/sd* to find out what your drives are.
  3. Run sudo apt-get install cryptsetup lvm2 mdadm to get the relevant tools you need. Then sudo su to get a root shell.
  4. Use mdadm -Q /dev/hda1 etc. to figure out which partitions are part of which RAID array.
  5. mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 etc. to build your three RAID arrays.
  6. modprobe -a dm-crypt dm-mod aes-i586 (the -a is needed to load several modules in one go).
  7. cryptsetup luksOpen /dev/md1 root to get a /dev/mapper/root device.
  8. vgchange -a y vg (if you called your volume group vg, like I did). Run vgscan -v first if your volume groups need redetecting now you've opened your LUKS volume.
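After that your logical volumes should appear under /dev/mapper, and you can mount things and poke around - for instance (assuming the vg/root naming from my setup):

mount /dev/mapper/vg-root /mnt
mount /dev/md0 /mnt/boot
chroot /mnt    # if you need to fix the installed system from the inside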

The Benefit of Hindsight

After trying the setup above I did end up making some changes. First up, there is absolutely no point in creating a RAID array for your swap - the Linux kernel is smart enough to stripe between devices itself if you configure more than one swap partition at the same priority.
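So instead, give each swap slice its own crypttab entry and the same pri= value in /etc/fstab - something like this (assuming the 300MB slices are hdc1 and hdd1):

# /etc/crypttab
cswap1          /dev/hdc1               /dev/random     swap
cswap2          /dev/hdd1               /dev/random     swap

# /etc/fstab
/dev/mapper/cswap1  none            swap    sw,pri=1        0       0
/dev/mapper/cswap2  none            swap    sw,pri=1        0       0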

Secondly, the system is way too slow if you try and run the whole thing encrypted. I redid it so that I had an unencrypted / and an encrypted /home - two logical volumes on the md1 array via LVM. So the stack looks like this now:

                Raw devices → Software RAID → LVM → Encryption → filesystem (/home)
                                               ↓
                                          filesystem (/)
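In practice that means carving two logical volumes out of the volume group and only layering LUKS on the one holding /home. Roughly (sizes illustrative):

pvcreate /dev/md1
vgcreate vg /dev/md1
lvcreate -n root -L 50G vg              # unencrypted /
lvcreate -n home -L 400G vg             # this one gets encrypted
mkfs.ext3 /dev/vg/root
cryptsetup luksFormat /dev/vg/home
cryptsetup luksOpen /dev/vg/home home
mkfs.ext3 /dev/mapper/home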

Recovering from a Failed Drive

As luck would have it one of the drives in the RAID failed, so I got to try the RAID out in anger. It worked very well, though not seamlessly. The crashing drive hung the system, for instance, and the system didn't come up cleanly on booting - the RAID system won't automatically start if you're missing a drive. You'll get a similar situation to the one during the install: the boot process will appear to hang for a few minutes before dropping you at an initramfs prompt. dmesg will give you a good idea of which drive failed, and cat /proc/mdstat will show you more details.
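From that initramfs prompt you can usually kick the degraded array into life by hand and carry on booting - something along these lines, listing whichever members survived:

cat /proc/mdstat                                                # see which array is inactive
mdadm --assemble --run /dev/md1 /dev/hda2 /dev/hdb2 /dev/hdc2   # --run starts it degraded
exit                                                            # continue the normal boot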

I've got a Western Digital drive, and they do a programme known as Advance RMA - where in exchange for your credit card details they dispatch a new drive to you before you have to send the old one back. It took about a week to arrive even though it was going via UPS, and the tracking feature never worked - but it turned up as promised without any hassle. You have a month to return the drive before you get charged. One benefit to this is that you get an approved box to return your drive.

Anyway, to replace a failed drive in your RAID array:

  1. Power down and swap the dead drive out.
  2. Boot. You may find that the system comes back normally this time; it did for me.
  3. cat /proc/mdstat to double-check which drive has failed.
  4. Copy the partition table from an old drive to the new drive: sfdisk -d /dev/olddisk | sfdisk /dev/newdisk
  5. ls /dev/newdisk* to check that your partitions appeared OK.
  6. Add these partitions to your RAID arrays: mdadm --manage /dev/md0 --add /dev/newdisk1 and so on for each array the new drive's partitions belong to.
  7. cat /proc/mdstat to check your array is rebuilding.
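While it rebuilds, /proc/mdstat shows a progress bar. An illustrative (not verbatim) snippet:

md1 : active raid5 hdd2[4] hdc2[2] hdb2[1] hda2[0]
      1464991488 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]
      [=====>...............]  recovery = 27.3% (133456789/488330496) finish=87.6min speed=67339K/sec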

This page was useful to me for this.