My Natty Upgrade Test

Written by Barry Warsaw in technology on Fri 15 April 2011. Tags: test drive, ubuntu, virtual machine

Ubuntu 11.04 (code name: Natty Narwhal) beta 2 was just released, and the final release is right around the corner. Canonical internal policy is that we upgrade to the latest in-development release as soon as it goes beta, to help with bug fixing, testing, and quality assurance.

Now, I've been running Natty on my primary desktops (my two laptops) since before alpha 1, and I've been very impressed with the stability of the core OS. One of my laptops cannot run Unity though, so I've mostly been a classic desktop user until recently. My other laptop can run Unity, but compiz and the wireless driver were too unstable to be usable until just before beta 1. Still, I diligently updated both machines daily, and at least on the classic desktop, Natty was working great. (Now that beta 1 is out, the wireless and compiz issues have been cleared up, and it's working great too.)

The real test is my beefy workstation: a Dell Studio XPS 435MT with 12GB of RAM and a quad-core i7-920, with an ATI Radeon HD 4670 graphics card, running dual-headed into two Dell 20" 1600x1200 flat panel displays. During the Maverick cycle I was a little too aggressive in upgrading it, because neither the free nor the proprietary drivers were ready to handle this configuration yet. I ended up with a system that either couldn't display any graphics at all, or didn't support the dual heads. This did eventually all get resolved before the final release, but it was kind of painful.

So this time I was a little gun-shy, and wanted to do more testing before I committed to upgrading this machine. Just before Natty beta 1, I dutifully downloaded the daily liveCD ISO and booted the machine from CD. On the surface, things seemed promising: I had compiz and Unity, but no dual-head. Running from the liveCD is fairly transient though; it doesn't save enough state between reboots to be a fair test of the machine.

How could I get a true test of Natty that would give me my normal development environment, run natively on the hardware, and yet be easily discarded if it turned out to not be ready yet? Here's where USB hard drives and virtual machines come in.

I'm a very heavy user of virtual machines. With plenty of disk space on this 1.5TB drive, I have maybe a dozen VMs. This lets me run the stable Ubuntu release as a host, and have VMs for several versions of Debian, several older versions of Ubuntu, and even flavors of Windows, Solaris, FreeBSD, and Fedora. (Let me know if you've successfully made a libvirt guest running as a hackintosh for OS X!) Some day I'll post about safe, copy-on-write backing disks.
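As a teaser for that future post, the trick is qcow2 overlay images. Here's a minimal sketch, assuming a reasonably current qemu-img and using made-up image paths; all guest writes land in the overlay while the base image stays pristine, so discarding a failed experiment is just deleting one file.

    import subprocess

    # Made-up paths for illustration; the base holds a pristine install.
    base = '/var/lib/libvirt/images/lucid-base.qcow2'
    overlay = '/var/lib/libvirt/images/lucid-scratch.qcow2'

    # Create a copy-on-write overlay backed by the base image. The base
    # is never written to, which is what makes this "safe".
    subprocess.run(
        ['qemu-img', 'create', '-f', 'qcow2', '-F', 'qcow2',
         '-b', base, overlay],
        check=True)

Point the VM at the overlay, and rolling back is just a matter of recreating that file.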

Now, the cool thing about these VMs is that I bridge the network and give all of them unique IP addresses on my internal network, so for all intents and purposes they are real machines. Most of them I access only through ssh, but virt-manager gives you a nice graphical desktop when you need it. So I can test just about anything on any x86 or amd64 operating system, with my full normal development environment.
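As an aside, if you ever need to map guests to those addresses, the libvirt Python bindings make it easy to dump each domain's MAC address (handy for pinning DHCP reservations). A quick sketch; the qemu:///system URI is an assumption about how your libvirt is set up:

    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.open('qemu:///system')
    for dom in conn.listAllDomains():
        # Each <interface> element in the domain XML carries a MAC.
        root = ET.fromstring(dom.XMLDesc(0))
        for iface in root.findall('devices/interface'):
            print(dom.name(), iface.find('mac').get('address'))
    conn.close()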

So here was my thinking: create a Natty VM that wouldn't use the normal virtual disk living in a file on the host file system. Instead, its disk would actually be on a 320GB USB external drive that I had lying around. I'd install Natty to this VM using the daily liveCD and get my full development environment up and running; then I'd shut down the host and reboot it to the USB drive. This would give me a persistent Natty running natively on the hardware, and if it didn't work out, I'd just reboot back to the internal drive. No fuss, no muss.

Let's cut to the chase: I eventually got this to work, and spent a day putting Natty through its paces. After a few pre-beta-1 updates, I was satisfied that all my hardware was working great, including full-resolution dual-head with Unity. That gave me the confidence to upgrade the host OS running on the internal drive, and I've been happily using it ever since.

Of course, all was not smooth sailing. I did run into a few hitches along the way.

When the primary host operating system mounted the USB drive, it often got a different /dev device assignment: sometimes it would come up as /dev/sdc and sometimes as /dev/sdg. The problem is that libvirt uses the /dev name as the device backing the VM's disk, so when the name changed, I'd have to go into the VM's configuration and fix the storage device path. This is kind of a pain through the virt-manager UI, since the storage device has to be deleted and then re-added; you can't just change the path (though I'll bet it can be done by editing the .xml configuration file directly).
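It can indeed be done that way: virsh edit on the domain opens the XML for hand editing, and the same surgery is scriptable with the libvirt Python bindings. Here's a hedged sketch, with the domain name and device paths as placeholders; pointing the disk at a stable /dev/disk/by-id/ symlink instead would presumably avoid the churn entirely, though I haven't tried that.

    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('natty-test')  # placeholder domain name

    # Rewrite the disk <source> to wherever the USB drive landed today.
    root = ET.fromstring(dom.XMLDesc(0))
    for disk in root.findall('devices/disk'):
        source = disk.find('source')
        if source is not None and source.get('dev', '').startswith('/dev/sd'):
            source.set('dev', '/dev/sdg')  # today's assignment

    # Redefine the persistent configuration with the corrected path.
    conn.defineXML(ET.tostring(root, encoding='unicode'))
    conn.close()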

When creating the storage device for the VM, you have several options for the disk type. At first I naturally chose USB disk, since the physical device was, in fact, a USB drive. But this caused the Ubiquity installer no end of trouble: it hung very early in the installation process. I never did investigate that or get it working, but I did discover that if you use a virtio disk and point it at the USB device path (see above), the installer works perfectly.
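For reference, here's roughly what that working configuration looks like when built from the command line with virt-install instead of through virt-manager. All of the names, sizes, and paths are illustrative; the key detail is bus=virtio aimed at the raw USB device node:

    import subprocess

    # Illustrative values throughout; /dev/sdc is wherever the USB
    # drive showed up, and bus=virtio is what kept Ubiquity happy.
    subprocess.run(
        ['virt-install',
         '--name', 'natty-test',
         '--ram', '2048',
         '--disk', 'path=/dev/sdc,bus=virtio',
         '--cdrom', 'natty-desktop-amd64.iso',
         '--network', 'bridge=br0'],
        check=True)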

You have to be sure your host machine's BIOS can boot off of a USB drive. Luckily this is a new enough machine that it works fine, but older machines may not be able to. I think you could probably do everything I've done above by installing that spare drive internally instead, and it would work just as well. But of course opening up the case is a PITA. :)

Grub was a little finicky. I first tried this just a few days before beta 1, and every time I installed the OS to the USB drive and then booted off it, I'd immediately be dropped to a grub> prompt with no way to complete the boot process. It was as if grub was not getting installed correctly, but at the time I also wasn't sure whether this idea could even work. Note that I did not want to add the USB drive to the chain loader on the primary internal drive; I just wanted to hit F12 during the BIOS phase and select the USB drive to boot from. I thought this should work, but it didn't, and I wondered whether something about the virtual environment caused grub to fail when run natively. I asked around on our internal tech list, but I think this was really a transient problem with the pre-beta ISOs: I tried again a day or so later with a new liveCD, and everything worked perfectly.

Of course, when booting off the USB drive natively, a few things are different. The virtual machine has a different MAC address, and thus a different host name and IP address, than the native host. So after booting off the USB drive, you have to fiddle with a few things to get those to line up, depending on how your machine is configured; for example, my /etc/hosts was incorrect. These minor problems didn't really slow me down though.
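The fiddling amounts to a sanity pass over the network identity files. A trivial sketch of the kind of check I mean, assuming nothing beyond standard paths:

    import socket

    # After booting natively, the hostname baked in by the VM install
    # may not match /etc/hosts; flag it so you can fix it up by hand.
    hostname = socket.gethostname()
    with open('/etc/hosts') as f:
        if hostname not in f.read():
            print('No /etc/hosts entry for', hostname, '- fix it up')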

Ignoring the transient, inexplicable grub issue, this turned out to be a very nice, completely non-destructive way to test drive the new Ubuntu version on the native hardware. I'll bet it would work for test driving other operating systems as well, and if one of my fantastic readers gives me a clue about how to build a hackintosh in a VM, I'll give that a try just for the fun of it. And of course I'll let you know how it goes!

