Paul Kiddie

Installing Xen on Ubuntu 8.04 and creating Ubuntu 8.04 domUs (guests)

January 17, 2010

I’ve recently been looking into virtualisation methods within Linux to find an optimal way of creating a network testbed of virtual machines. The machines are relatively high-powered for this task (AMD Athlon 64 3700+ with 2GB RAM) and are easily capable of running more than one Linux instance. The only real limitation I can see is the number of PCI slots the motherboard has for network adapters. :)

The aim is to use the testbed to test and develop routing protocols, implemented through the use of Netfilter kernel modules and a user space module with all the routing logic in it. See my earlier posts on creating simple Netfilter modules.

My virtualisation wish list was the following:

  1. The ability to dedicate network adapters to virtual instances without the use of virtual network adapters and drivers
  2. Minimise virtualisation overheads (memory, I/O, network)
  3. Remotely administer instances in a cluster configuration
  4. Ability to install kernel modules on virtual instances
  5. Do this without hardware assistance, as the Athlon 64 3700+ (socket 939) doesn’t support AMD-V

I looked at three possible ways to virtualise in Linux:

  • KVM (Kernel-based Virtual Machine) provides full virtualisation and requires hardware that supports the Intel VT or AMD-V extensions. Guests are virtualised without requiring modification, though paravirtualised disk and network drivers are provided to improve performance in VMs with intensive I/O.
  • Xen supports both paravirtualisation and hardware-assisted virtualisation. With paravirtualisation, computers without Intel VT or AMD-V can run guest OSes, but these guests (a domU in Xen terminology) need to be modified to suit.
  • OpenVZ is an operating-system-level virtualisation technology where a single kernel running on the host is containerised into multiple isolated instances, providing close to “bare-metal” performance.

My wish list pretty much dictated the type of virtualisation I chose:

  • Point 1 ruled out full-virtualisation techniques that virtualise the network adapter. I want to ensure that the measurements taken are not skewed by the overheads of running a virtual network adapter. In addition, I want to dedicate a network adapter to each virtualised instance. In Xen the documented way is to use pciback on the host to capture the necessary PCI cards, then assign each PCI device directly to a guest. In OpenVZ this can be achieved by creating a container with the --netdev_add flag.
  • Point 2 brought Xen and OpenVZ to the forefront: Xen provides high performance through paravirtualisation, whilst OpenVZ guests run on a partition of the host kernel with little overhead (1-3% is quoted in various OpenVZ literature).
  • Point 3 highlighted VM-specific distributions such as Proxmox, cluster-management tools such as Ganeti, and distribution-specific VM managers such as virt-manager.
  • Point 4 was a requirement very specific to the testbed and ruled out OpenVZ due to its shared-kernel approach.
  • Point 5 ruled out KVM and the hardware-assisted branch of Xen.

So Xen paravirtualisation was chosen, with Ubuntu 8.04 as the dom0 (host) and domU (guests). This article is based on an aggregation of work from a number of sources, so thanks to these people! :)

Installing Ubuntu 8.04.3 with xen-server-3.3 as Dom0, and Ubuntu 8.04.3 as DomU

Preparing the host Ubuntu 8.04 for the Dom0 kernel

Install Ubuntu 8.04 Server as normal.

Once installed,

sudo su
/etc/init.d/apparmor stop
update-rc.d -f apparmor remove # stops AppArmor (mandatory access control) from running on the host
apt-get update && apt-get upgrade # upgrades the server to 8.04.3

Then, enable backports in /etc/apt/sources.list by removing the comments from the backports lines in this file, to enable access to xen 3.3. Once you’ve done this:

apt-get update
apt-get install ubuntu-xen-server #will install newer 3.3 rather than 3.2
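If you prefer to enable backports non-interactively, a sed one-liner can uncomment those lines. The sketch below runs against a temporary copy so nothing on your system is touched; on the real host, point sed at /etc/apt/sources.list instead (the mirror URL in the sample is illustrative):

```shell
# Uncomment any hardy-backports deb/deb-src lines in a sources.list-style file.
f=$(mktemp)
cat > "$f" <<'EOF'
# deb http://gb.archive.ubuntu.com/ubuntu/ hardy-backports main restricted universe multiverse
# deb-src http://gb.archive.ubuntu.com/ubuntu/ hardy-backports main restricted universe multiverse
EOF
sed -i 's/^# *\(deb.*hardy-backports\)/\1/' "$f"
cat "$f"    # both lines now begin with deb / deb-src
rm -f "$f"
```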

Fix some error messages:

mv /lib/tls /lib/tls.disabled

Line 186 of /etc/init.d/xendomains has a repeated cut statement. Replace:

rest=`echo "$1" | cut cut -d\  -f2-`

with:

rest=`echo "$1" | cut -d\  -f2-`
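The same fix can be applied with a one-line sed. The sketch below demonstrates it on a temporary copy of the broken line; on the host, substitute /etc/init.d/xendomains (and use -i.bak if you want a backup):

```shell
# Collapse the duplicated "cut" in the xendomains line.
f=$(mktemp)
printf 'rest=`echo "$1" | cut cut -d\\  -f2-`\n' > "$f"
sed -i 's/cut cut/cut/' "$f"
cat "$f"    # the pipeline now reads: | cut -d\  -f2-
rm -f "$f"
```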

Install some missing dependencies:

apt-get install python-xml python-twisted

Patch an issue with the xen add command that has been present since Xen 3.1:

echo 'oldxml' > /usr/lib/python2.5/site-packages/oldxml.pth

Restart machine, login and check active kernel is now a xen based kernel:

uname -r # should be the current kernel version with "-xen" appended

Suppress possible error messages when guests start up:

vi /etc/sysctl.conf

and add the line: xen.independent_wallclock=1
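The same line can be appended without opening an editor. A sketch, demonstrated on a temporary file (use /etc/sysctl.conf on the host); the grep guard keeps the step idempotent, so re-running it never duplicates the line:

```shell
f=$(mktemp)
# Append the setting only if it is not already present (run twice to show idempotency).
grep -qx 'xen.independent_wallclock=1' "$f" || echo 'xen.independent_wallclock=1' >> "$f"
grep -qx 'xen.independent_wallclock=1' "$f" || echo 'xen.independent_wallclock=1' >> "$f"
grep -c 'independent_wallclock' "$f"    # prints 1: the line was added once
rm -f "$f"
```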

Preparing xen for install of Ubuntu 8.04 (hardy) guests

Create a folder for xen images, which can be altered to suit.

mkdir /home/xen #folder for xen images

Now let’s update the xen-tools.conf configuration file before creating guests.

vi /etc/xen-tools/xen-tools.conf

Specifically:

  • dir = /home/xen (should be the same as above; guest domains and images will reside here)
  • dist = hardy
  • arch = i386
  • gateway = 192.168.x.x (replace with your network config)
  • netmask =
  • broadcast = 192.168.x.255
  • mirror = (UK Ubuntu mirror; replace with your local mirror)
  • size = 20Gb (this one is up to you)
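Put together, the relevant fragment of xen-tools.conf looks roughly like the sketch below. The netmask and mirror values here are illustrative assumptions (a /24 netmask to match the .255 broadcast, and one example UK mirror), so substitute your own:

```
dir       = /home/xen       # where guest domains and images will reside
dist      = hardy
arch      = i386
gateway   = 192.168.x.x     # replace with your gateway
netmask   = 255.255.255.0   # assumed /24, matching the .255 broadcast
broadcast = 192.168.x.255
mirror    = http://gb.archive.ubuntu.com/ubuntu/  # example UK mirror; use your local one
size      = 20Gb
```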

For each VPC:

xen-create-image --hostname=testY --ip=192.168.x.y --ide --force # assigns IP 192.168.x.y to the guest

Add, or expand, the “extra” line in /etc/xen/testY.cfg to include extra="clocksource=jiffies". Now add and start the VM:

xm create /etc/xen/testY.cfg -c # starts the VM and attaches to its console

Now you are inside the VM:

vi /etc/sysctl.conf

Add the line: xen.independent_wallclock=1

Then fix some warnings and upgrade the VM:

mv /lib/tls /lib/tls.disabled
apt-get update && apt-get upgrade #upgrades server VM to 8.04.3

Fixing bug with tty

After installing a guest, Xen seems to modify a file on the host that it shouldn’t (a known bug). So, on the host:

vi /etc/event.d/tty1

And change the line exec /sbin/getty 38400 xvc0 to exec /sbin/getty 38400 tty1.
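This edit can also be scripted with sed. A sketch on a temporary copy; on the host, run the same substitution against /etc/event.d/tty1 (with -i.bak for a backup):

```shell
f=$(mktemp)
printf 'exec /sbin/getty 38400 xvc0\n' > "$f"
# Rewrite the console device from xvc0 back to tty1.
sed -i 's|38400 xvc0|38400 tty1|' "$f"
cat "$f"    # exec /sbin/getty 38400 tty1
rm -f "$f"
```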

Fixing bug in assigning PCI cards to domU

There seems to be a breaking change between Xen 3.2 and Xen 3.3 which prevents PCI cards on the same bridge from being assigned to different guests. I wanted to assign an Ethernet card to each domU instance, which looked possible, but Xen complained, saying I had to add all the PCI cards. Thankfully, a submitted patch restores the Xen 3.2 functionality for paravirtualised guests. From what I understand, this restriction is due to limitations in AMD-V and Intel IOMMU support, but it should not affect Xen in paravirtualised mode. The lines I added are in bold type; make sure you use correct indentation, as these are Python files.

In /usr/lib/python2.5/site-packages/xen/util/

665         def do_FLR(self):
666         """ Perform FLR (Functional Level Reset) for the device.
667         """
**668         return**
669         if_type == DEV_TYPE_PCIe_ENDPOINT:

In /usr/lib/python2.5/site-packages/xen/xend/server/

375         pci_dev_list = pci_dev_list + [(domain, bus, slot, func)]
377         for (domain, bus, slot, func) in pci_dev_list:
**378                         continue**
379                         try:

Now reboot the machine; Python should recompile the modified files without error.

Seizing PCI devices for use in domU guests

This is a feature of Xen that allows you to seize a PCI card from the host and allocate it directly to a guest VM, without needing to virtualise the adapter in any way. Type lspci to determine the IDs of the PCI cards you want to assign to a domU (VM), and note these down. These IDs are of the form xx:xx.x. In my case they were 01:07.0, 01:08.0 and 01:09.0, corresponding to each of the network adapters.

Now we’ll edit the boot entry on the Xen host in order to seize the PCI cards from the host.

vi /boot/grub/menu.lst

Go to the active kernel entry (i.e. the one called Xen 3.3 / Ubuntu 8.04.3 / …) and add the following text to the end of the module line: pciback.hide=(xx:xx.x)(yy:yy.y). In my case, I needed pciback.hide=(01:07.0)(01:08.0)(01:09.0)

Now reboot, and check pciback has successfully seized the appropriate devices by typing:

dmesg | grep pciback

Now go to the configuration file for the VM you wish to interface with the PCI card, e.g. /etc/xen/testY.cfg, and add the following line (I did so under the vif entry):

pci = ['xx:xx.x','yy:yy.y']

For my configuration, this was:

pci = ['01:07.0','01:08.0']

Now start the Xen VM in the normal way. You shouldn’t get any errors about co-assigning a PCI device, as this should have been dealt with by applying the above patch. Type lspci within the VM and you should see your PCI device assigned to the VM.

Clone VPC

This assumes the VM to clone is named old_dom and the new VM is new_dom.

mkdir /home/xen/domains/new_dom
dd if="/home/xen/domains/old_dom/disk.img" of="/home/xen/domains/new_dom/disk.img" bs=4k
dd if="/home/xen/domains/old_dom/swap.img" of="/home/xen/domains/new_dom/swap.img" bs=4k
cp /etc/xen/old_dom.cfg /etc/xen/new_dom.cfg
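The copy steps above can be wrapped in a small shell function for repeated cloning. A sketch, with the image and config directories as assumptions (override BASE/CFG to match your layout; call as clone_domu old_dom new_dom):

```shell
# clone_domu OLD NEW - copy a guest's disk/swap images and its Xen config.
clone_domu() {
    old=$1
    new=$2
    base=${BASE:-/home/xen/domains}
    cfg=${CFG:-/etc/xen}
    mkdir -p "$base/$new"
    dd if="$base/$old/disk.img" of="$base/$new/disk.img" bs=4k
    dd if="$base/$old/swap.img" of="$base/$new/swap.img" bs=4k
    cp "$cfg/$old.cfg" "$cfg/$new.cfg"
}
```

The copied config still needs its disk, name and vif entries edited by hand, as described next.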

Edit the new_dom configuration file

vi /etc/xen/new_dom.cfg

In disk: rename any references from old_dom to new_dom

In name: rename from old_dom to new_dom

In vif: you’ll need to use a different IP address and a different MAC address

Add cloned VM to Xen:

xm create /etc/xen/new_dom.cfg -c

The cloned VM will still have the hostname of old_dom, so to fix this:

hostname new_dom

To make the change persist across reboots, also update /etc/hostname, and you might need to update entries in /etc/hosts.
