The Ubuntu Hypervisor stack consists of qemu-kvm and libvirt at its core. QEMU provides the userspace emulation, KVM provides the kernel acceleration, and libvirt provides an abstraction layer for applications to interface with various hypervisors at an API level.

This page is dedicated to enumerating and tracking the testing of some of the basic and advanced features of this hypervisor stack.

For basic documentation, see:


Feature: Nested KVM

Nested KVM is possible for AMD machines, but not yet for Intel ones. Further (to my surprise), on AMD, you can only nest amd64 on amd64, or i386 on i386. You cannot nest an i386 VM inside an amd64 VM.

To start a VM in which you wish to nest another vm, use the -enable-nesting flag to kvm. Then just call kvm as usual inside that VM.

Nesting is not yet supported in libvirt by default. To use it, I did the following:

* Created a new wrapper called kvm.nested

cat > /usr/bin/kvm.nested << 'EOF'
#!/bin/sh
exec /usr/bin/kvm "$@" -enable-nesting
EOF
chmod +x /usr/bin/kvm.nested

* Allow libvirt to use that wrapper

cat >> /etc/apparmor.d/abstractions/libvirt-qemu << EOF
  /usr/bin/kvm.nested rmix,
EOF
/etc/init.d/apparmor restart

* Then in your vm.xml, use:

 <emulator>/usr/bin/kvm.nested</emulator>

instead of the usual

 <emulator>/usr/bin/kvm</emulator>
* Finally, while not necessary, I used LVM partitions for the nested guest rather than container files.
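The wrapper from the first step can be sanity-checked without booting anything, by pointing a copy of it at a stub instead of the real kvm binary (the /tmp/nestdemo paths below are purely for illustration):

```shell
# Stand-in for the real /usr/bin/kvm: just echoes its arguments
mkdir -p /tmp/nestdemo
cat > /tmp/nestdemo/kvm << 'EOF'
#!/bin/sh
echo "$@"
EOF
# Same shape as the kvm.nested wrapper above, but pointed at the stub
cat > /tmp/nestdemo/kvm.nested << 'EOF'
#!/bin/sh
exec /tmp/nestdemo/kvm "$@" -enable-nesting
EOF
chmod +x /tmp/nestdemo/kvm /tmp/nestdemo/kvm.nested
/tmp/nestdemo/kvm.nested -m 512   # prints: -m 512 -enable-nesting
```

The output confirms the wrapper appends -enable-nesting to whatever arguments libvirt passes.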

QEMU Feature: Serial Console

libvirt serial console

QEMU Feature: VNC

QEMU Feature: virtio disks

QEMU Feature: virtio net



libvirt save/restore VM

libvirt+qemu hot-add

live migration

libvirt NAT network

libvirt private network

kvm bridged network

apt-get remove network-manager wicd

cat > /etc/network/interfaces << EOF
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet dhcp
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0
EOF

/sbin/ifconfig tap1 up
brctl addif br0 tap1

dhclient br0

save / restore


==== libvirt ====

As of libvirt 0.8.1, a new snapshot API is supported. The backing file must be qcow2, and the disk definition for the VM must include a line defining it as type qcow2, i.e.:

<driver name='qemu' type='qcow2' cache='writethrough'/>
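For context, here is a sketch of a complete disk stanza containing that driver line; the source file path and target device are hypothetical and should match your own setup:

```xml
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='writethrough'/>
      <source file='/var/lib/libvirt/images/vm1.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
```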

Then, while the machine is running, you can do

virsh snapshot-create VM-name

That command returns a snapshot ID, e.g. '12798656811'. You can list all snapshots with 'virsh snapshot-list VM-name', and you can restore from a snapshot with 'virsh snapshot-revert VM-name snapshot-name'. This however only snapshots memory, not disk. The most promising route for doing full snapshots from libvirt is the soon-to-come ability to send arbitrary qemu monitor commands through libvirt, though that will likely not be available in libvirt 0.8.3.


Started up several KVM instances, and looked at


which went up.


mkdir /var/lib/tftpboot/pxelinux.cfg
cat > /var/lib/tftpboot/pxelinux.cfg/default << eof
label default
  kernel vmlinuz-2.6.35-9-generic
  initrd initrd.img-2.6.35-9-generic
  append root=/dev/sda ro
eof

cp /boot/{vmlinuz-2.6.35-9-generic,initrd.img-2.6.35-9-generic} \
cp /usr/lib/syslinux/{memdisk,menu.c32,pxelinux.0,vesainfo.c32,vesamenu.c32} \

dnsmasq --strict-order --bind-interfaces --pid-file=/var/run/libvirt/network/ --conf-file= --listen-address --except-interface lo --dhcp-range, --dhcp-lease-max=253 --enable-tftp --tftp-root=/var/lib/tftpboot --dhcp-boot=pxelinux.0

The guest then failed to boot, since it was unable to find a root fs. (Installing an nfs server to serve a root fs would get us past that.)

GPXE (Etherboot)

etherboot package (on lucid and maverick)

gpxe upstream (on lucid)

git clone git://
cd gpxe/src
make
kvm -fda bin/gpxe.dsk -net nic -net user -bootp


iscsi boot

apt-get install tgt open-iscsi-utils open-iscsi
dd if=/dev/zero of=/var/lib/tgtd/kvmguests/a.img bs=1M seek=10240 count=0
dd if=/dev/zero of=/var/lib/tgtd/kvmguests/shareddata.img bs=1M count=512
tgtadm --lld iscsi --op new --mode target --tid 1 --targetname iqn.2004-04.fedora:fedora13:iscsi.kvmguests
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 --backing-store /var/lib/tgtd/kvmguests/a.img 
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 2 --backing-store /var/lib/tgtd/kvmguests/shareddata.img 
tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address ALL
cat > /root/pool.xml << EOF
<pool type='iscsi'>
  <name>kvmguests</name>
  <source>
    <host name='localhost'/>
    <device path='iqn.2004-04.fedora:fedora13:iscsi.kvmguests'/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>
EOF
virsh pool-define /root/pool.xml
virsh pool-start kvmguests
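The first dd above (seek=10240 count=0) writes no data at all; it only extends the file, producing a sparse 10 GiB image that consumes almost no disk space until written to. A small-scale illustration of the same trick in /tmp:

```shell
# count=0 with seek=N extends the file to N blocks without writing anything,
# so the image is sparse: apparent size 1 GiB, near-zero allocated blocks
mkdir -p /tmp/kvmguests
dd if=/dev/zero of=/tmp/kvmguests/demo.img bs=1M seek=1024 count=0 2>/dev/null
stat -c %s /tmp/kvmguests/demo.img   # prints: 1073741824
du -k /tmp/kvmguests/demo.img        # allocated size is (almost) 0
```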

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/disk/by-path/ip-'/>
      <target dev='hda' bus='ide'/>
      <address type='drive' controller='0' bus='0' unit='0'/>
    </disk>


Not yet tested, but came across this page, which might be helpful:

USB passthrough

Using Libvirt

        virsh start maverick2

{{{ Bus 002 Device 006: ID 1058:1023 Western Digital Technologies, Inc. }}}

<hostdev mode='subsystem' type='usb'>
  <source>
    <vendor id='0x1058'/>
    <product id='0x1023'/>
  </source>
</hostdev>
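The fragment above needs to exist on disk as /tmp/a.xml before the attach-device call; one way to put it there, using the vendor/product ids from the lsusb output above:

```shell
# Write the hostdev fragment referenced by the attach-device command
cat > /tmp/a.xml << 'EOF'
<hostdev mode='subsystem' type='usb'>
  <source>
    <vendor id='0x1058'/>
    <product id='0x1023'/>
  </source>
</hostdev>
EOF
```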

sudo virsh attach-device maverick2 /tmp/a.xml

/dev/bus/usb/*/[0-9]* rw,

to either /etc/apparmor.d/abstractions/libvirt-qemu (which gives all guests full access to physical host devices) or to


which will give only the one guest that access. (Thanks to jdstrand for help getting that straight.)

Using KVM

Make sure to start kvm with the '-usb' flag and to open a monitor (say, with '-monitor stdio'). Then attach the usb device using the monitor command:

 usb_add host:1058:1023

using the same vendor/product ids as above, or by passing a device on the kvm command line, e.g.:

  kvm -usb -usbdevice host:1390:0001 ...

VirtFeatureVerification (last edited 2011-01-22 15:36:50 by cpe-66-69-252-85)