From FedoraProject

Revision as of 16:16, 2 July 2010 by Crobinso (Talk | contribs)



About Xen

Xen is an open source virtual machine system. More information on Xen itself can be found on the Xen project website and the Fedora Xen page.

Fedora is following the 3.0.x Xen line. Xen 3.0.0 was released in December of 2005 and is incompatible with guests using the previous Xen 2.0.x releases.

Quick Start

Setting up Xen and guests in Fedora Core 6 has some significant changes and improvements since the release of Fedora Core 5. The following guide will explain how to set up Xen, and how to create and manage guests using either the command line or GUI interface.

System Requirements

  • Your system must use GRUB, the default boot loader for Fedora. (This is required because you actually boot the Xen hypervisor, which then starts the Linux kernel using the MultiBoot standard.)
  • Sufficient storage space for the guest operating systems. A minimal command-line Fedora system requires around 600MB of storage, a standard desktop Fedora system requires around 3GB.
  • Generally speaking, you will want 256 MB of RAM per guest that you wish to install.

Para-virtualized guests

Any x86_64 or ia64 CPU can run para-virtualized guests. Running i386 guests requires a CPU with the PAE extension. Many older laptops (particularly those based on Pentium Mobile / Centrino) do not have PAE support. To determine if a CPU has PAE support, run the following command:

$ grep pae /proc/cpuinfo
flags           : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 mmx fxsr sse syscall mmxext 3dnowext 3dnow up ts

The above output shows a CPU that does have PAE support. If the command returns nothing, then the CPU does not have PAE support.

Warning: New PAE requirement for i386 guests.
Previous versions of the Xen kernel did not require PAE support. This change will affect laptop users.

Fully-virtualized guests (HVM/Intel-VT/AMD-V)

To run fully virtualized guests, host CPU support is needed. This is typically referred to as Intel VT, or AMD-V. Xen uses a generic 'HVM' layer to support both CPU vendors. To check for Intel VT support look for the 'vmx' flag, or for AMD-V support check for 'svm' flag:
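As with the PAE check, this is done by grepping /proc/cpuinfo (the flag lists below will vary by CPU model):

```shell
# Look for hardware virtualization flags; prints matching lines if present
grep -E 'vmx|svm' /proc/cpuinfo
```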

For Intel:
flags           : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx est tm2 cx16 xtpr lahf_lm

For AMD:
flags           : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow pni cx16 lahf_lm cmp_legacy svm cr8_legacy

If you have the 'svm' or 'vmx' flag, your CPU is capable of full virtualization. However, a large number of machines ship from the factory with this feature disabled in the BIOS, so to see whether it is enabled you need to check for the 'hvm-???' flags in the hypervisor capability set:
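Assuming the system is already booted into the Xen kernel, the capability set can be read from 'xm info' (run as root on the Xen host):

```shell
# Show the hypervisor capability flags (the xen_caps line)
xm info | grep xen_caps
```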

xen-3.0-x86_64 hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64

If the above shows one or more 'hvm-???' flags then everything is working normally. If no hvm flag is shown, reboot, go into the BIOS and look for a setting related to 'Virtualization' - every BIOS manufacturer calls the setting by a different name. If you are really very unlucky, the BIOS may not have a virtualization option at all; in that case the only option is to contact your hardware vendor for an updated BIOS :-(


Commands which require root privileges are prefixed with the character '#'. To become root, issue the command 'su -' as a normal user and supply the root password.

Installing the Xen Software

When doing a fresh install of Fedora Core 6, you can specify that Xen should be installed by selecting Xen in the Base Group in the installer.

If you already have a Fedora Core 6 system installed, you can install the Xen kernel and tools by running the following command:
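The command itself is missing from this page; given the packages described below, it was presumably:

```shell
# Install the Xen-enabled kernel and the userspace tools
yum install kernel-xen xen
```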

This installs the required packages and their dependencies. 'kernel-xen' contains the Xen-enabled kernel for both the host and guest operating systems as well as the hypervisor. Also, the 'xen' package will be installed, which contains the user-space tools for interacting with the hypervisor.

Once this is done, there should be an entry in the file /boot/grub/grub.conf for booting the xen kernel. The xen kernel is not set as the default boot option.

To set GRUB to boot kernel-xen by default, edit /boot/grub/grub.conf and set the default entry to the Xen kernel. (Note that you can also make future kernel-xen packages the default kernel by editing /etc/sysconfig/kernel.)

This is an example /boot/grub/grub.conf configured to boot into the Xen hypervisor:

title Fedora Core (2.6.18-1.2784.fc6)
root (hd0,0)
kernel /boot/vmlinuz-2.6.18-1.2784.fc6 ro root=LABEL=/1 rhgb quiet
initrd /boot/initrd-2.6.18-1.2784.fc6.img

title Fedora Core (2.6.18-1.2784.fc6xen)
root (hd0,0)
kernel /boot/xen.gz-2.6.18-1.2784.fc6
module /boot/vmlinuz-2.6.18-1.2784.fc6xen root=LABEL=/1
module /boot/initrd-2.6.18-1.2784.fc6xen.img

Enabling Xen

Once the system is booted into the Xen kernel, check to verify the kernel and that Xen is running:
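The verification commands are missing here; presumably they were a kernel check plus a domain listing (run as root; your output will differ):

```shell
# The running kernel version should carry the "xen" suffix
uname -r
# List running domains; Domain-0 should appear
xm list
```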


Name                              ID Mem(MiB) VCPUs State  Time(s)
Domain-0                           0      610     1 r----- 12492.1

The above output should show that the xen kernel is loaded and that Domain-0 (the host operating system) is running.

Warning: For the default setup, networking for guest OSs is bridged.
This means that they will get an IP address on the same network as your host; if you have a DHCP server providing addresses, you will need to ensure that it is configured to give addresses to your guests. You can change to another networking type by editing /etc/xen/xend-config.sxp.

Building a Fedora Guest System

With Fedora Core 6, installation of Fedora Core 6 Xen guests using anaconda is supported. The installation can be started on the command line via the virt-install program or in the GUI program virt-manager.

Building a Fedora Guest System using virt-install

Start the interactive install process by running the virt-install program:
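Run without arguments, virt-install prompts interactively (run as root):

```shell
# Start the interactive guest-install prompts
virt-install
```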

The following questions about the new guest OS will be presented. This information can also be passed as command line options; run with an argument of --help for more details. In particular, kickstart options can be passed with -x ks=options.

  1. What is the name of your virtual machine? This is the label that will identify the guest OS. This label will be used for various xm commands and will also appear in virt-manager and the Gnome-panel Xen applet. In addition, it will be the name of the /etc/xen/<name> file that stores the guest's configuration information.
  2. How much RAM should be allocated (in megabytes)? This is the amount of RAM to be allocated for the guest instance in megabytes (eg, 256). Note that installation with less than 256 megabytes is not recommended.
  3. What would you like to use as the disk (path)? The local path and file name of the file to serve as the disk image for the guest (eg, /home/joe/xenbox1). This will be exported as a full disk to your guest.
  4. How large would you like the disk to be (in gigabytes)? The size of the virtual disk for the guest (only appears if the file specified above does not already exist). 4.0 gigabytes is a reasonable size for a "default" install.
  5. Would you like to enable graphics support (yes or no): Should the graphical installer be used?
  6. What is the install location? This is the path to a Fedora Core 6 installation tree in the format used by anaconda. NFS, FTP, and HTTP locations are all supported. Examples include:
Installation must be a network type. It is not possible to install from a local disk or CDROM. It is possible, however, to set up an installation tree on the host OS and then export it as an NFS share.
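Illustrative install locations (the hostnames and paths here are placeholders, not real mirrors):

```shell
nfs:example.com:/path/to/fc6/tree
http://example.com/path/to/fc6/tree
ftp://example.com/path/to/fc6/tree
```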

The installation will then commence. If graphics were enabled, a VNC window will open and present the graphical installer. If graphics were not enabled, the standard text installer will appear. Proceed as normal with the installation.

Building a Fedora Guest System using virt-manager

Start the GUI Virtual Machine Manager by running the following command as root:
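The command itself was lost from this page; it is simply:

```shell
# Launch the graphical Virtual Machine Manager
virt-manager
```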

If Virtual Machine Manager is opened from the Gnome menu under System Tools, rather than from a root command prompt, it will not prompt for the root password and will be limited to read-only access.
  1. Choose "Local Xen Host" and click "Connect" in the "Open Connection" dialog window.
  2. Click the "New" button at the bottom of the virt-manager window, or select File-->New.
  3. A wizard will present the same questions as appear with the virt-install command-line utility (see descriptions above). The wizard assumes that a graphical installation is desired and does not prompt for this option.
  4. On the last page of the wizard there is a "Finish" button. When this is clicked, the guest OS is provisioned. After a few moments a VNC window should appear. Proceed with the installation as normal.

Building a Fedora Guest System using 'cobbler' and 'koan'

Cobbler is a tool for configuring a provisioning server for PXE, Xen, and existing systems. See the cobbler documentation for details. The following instructions are rather minimal; more configuration options are available.

First, set up a provisioning server:

yum install cobbler
man cobbler # read the docs!
cobbler check # validate that the system is configured correctly
cobbler distro add --name=myxendistro --kernel=/path/to/vmlinuz --initrd=/path/to/initrd.img
cobbler profile add --name=myxenprofile --distro=myxendistro [--kickstart=/path/to/kickstart]
cobbler list # review the configuration
cobbler sync # apply the configuration to the filesystem

Alternatively, cobbler can import a Fedora rsync mirror and create profiles from it automatically. Some of the imported distros will be Xen profiles and some will be for bare metal; you will need to use the Xen profiles. See the manpage for details.

cobbler import --mirror=rsync://your-fedora-mirror --mirror-name=fedora
cobbler sync

On the system that will host the image:

yum install koan
koan --virt --profile=myxenprofile --server=hostname-of-cobbler-server

After Installation

When the installation of the guest operating system is complete, it can be managed using the GUI virt-manager program or on the command line using xm.

Managing Virtual Machines graphically with virt-manager

Start the GUI Virtual Machine Manager by running the following command:
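As in the installation section, the command is:

```shell
# Launch the graphical Virtual Machine Manager
virt-manager
```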

If you are not root, you will be prompted to enter the root password. If you merely wish to use the UI in read-only mode for monitoring, you can also choose to "Run unprivileged".

  • Choose "Local Xen Host" and click "Connect" in the "Open Connection" dialog window.
  • The list of virtual machines is displayed in the main window. The first machine is called "Domain 0"; this is the host computer.
  • If a machine is not listed, it is probably not running. To start up a machine select "File-->Restore a saved machine..." and select the file that serves as the guest's disk.
  • The display lists the status, CPU and memory usage for each machine. Additional statistics can be selected under the "View" menu.
  • Double click the name of a machine to open the virtual console.
  • From the virtual console, select "View-->Details" to access the machine's properties and change its hardware configuration
  • To access the serial console (if there is a problem with the graphical console) select "View-->Serial Console"

For further information about virt-manager consult the project website

Bugs in the virt-manager tool should be reported in BugZilla against the 'virt-manager' component

Managing Virtual Machines from the command line with virsh

Virtual machines can be managed on the command line with the virsh utility. The virsh utility is built around the libvirt management API and has a number of advantages over the traditional Xen xm tool:

  • virsh has a stable set of commands whose syntax and semantics will be preserved across updates to Xen.
  • virsh can be used as an unprivileged user for read-only operations (e.g. listing domains, getting info, etc.)
  • virsh will (in future) be able to manage QEMU, VMware, etc. machines in addition to Xen, since libvirt is hypervisor-agnostic.

To start a new virtual machine from an XML vm definition:

To list the virtual machines currently running, use:

To gracefully power off a guest use:

To save a snapshot of the machine to a file of your choosing:

To restore a previously saved snapshot:

To export the XML config associated with a virtual machine:

For a complete list of commands available for use with virsh run:
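The virsh invocations themselves are missing from this page; a reconstruction of the commands referred to above follows (guest names and file paths are illustrative):

```shell
virsh create guest.xml            # start a new VM from an XML definition
virsh list                        # list running virtual machines
virsh shutdown GuestName          # gracefully power off a guest
virsh save GuestName guest.state  # save a snapshot of a machine to a file
virsh restore guest.state         # restore a previously saved snapshot
virsh dumpxml GuestName           # export the XML config for a VM
virsh help                        # complete list of commands
```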

Or consult the manual page virsh(1)

Bugs in the virsh tool should be reported in BugZilla against the 'libvirt' component

Managing Virtual Machines from the command line with xm

In addition to the virsh command, virtual machines can also be managed on the command line with the Xen-specific xm utility. To power on a virtual machine and attach a serial console, use:

To list the virtual machines currently running, use:

To power off a guest use:

To save a snapshot of the machine to a file of your choosing:

To restore a previously saved snapshot:

To display top-like statistics for all running machines:

For a complete list of commands available for use with xm run:
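The xm invocations are likewise missing; a reconstruction follows (guest names and file paths are illustrative):

```shell
xm create -c GuestName          # power on a VM and attach a serial console
xm list                         # list running virtual machines
xm shutdown GuestName           # power off a guest
xm save GuestName guest.state   # save a snapshot of a machine to a file
xm restore guest.state          # restore a previously saved snapshot
xm top                          # top-like statistics for all running machines
xm help                         # complete list of commands
```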

Bugs in the xm tool should be reported in BugZilla against the 'xen' component



SELinux

The SELinux policy in Fedora Core 6 has the necessary rules to allow use of Xen with SELinux enabled. The main caveat to be aware of is that any file-backed disk images need to be in a special directory - /var/lib/xen/images. This applies both to regular disk images and to ISO images. Block-device-backed disks are already labelled correctly to allow them to pass SELinux checks.

Log files

There are two log files stored on the host system to assist with debugging Xen-related problems. The file /var/log/xen/xend.log holds the same information reported with 'xm log'. Unfortunately these log messages are often very short and contain little useful information. The following is the output from trying to create a domain running the kernel for NetBSD/xen.

[2005-06-27 02:23:02 xend]  ERROR (SrvBase:163) op=create: Error creating domain:(0, 'Error')
Traceback (most recent call last):
File "/usr/lib/python2.4/site-packages/xen/xend/server/", line 107, in _perform
val = op_method(op, req)
File "/usr/lib/python2.4/site-packages/xen/xend/server/", line 71, in op_create
raise XendError("Error creating domain: " + str(ex))
XendError: Error creating domain: (0, 'Error')

The second file, /var/log/xen/xend-debug.log usually contains much more detailed information. Trying to start the NetBSD/xen kernel will result in the following log output:

ERROR: Will only load images built for Xen v3.0
ERROR: Actually saw: 'GUEST_OS=netbsd,GUEST_VER=2.0,XEN_VER=2.0,LOADER=generic,BSD_SYMTAB'
ERROR: Error constructing guest OS

When reporting errors, always include the output from both /var/log/xen/xend.log and /var/log/xen/xend-debug.log.

If starting a fully-virtualized domain (i.e. to run an unmodified OS), there are also logs in /var/log/xen/qemu-dm*.log which can contain useful information.

Finally, hypervisor logs can be seen by running the command

xm dmesg

Serial Console

Host serial console access

For more difficult problems, serial console can be very helpful. If the Xen kernel itself has died and the hypervisor has generated an error, there is no way to record the error persistently on the local host. Serial console lets you capture it on a remote host.

You need to set up the Xen host for serial console output, and set up a remote host to capture it. For the console output, you need to set appropriate options in /etc/grub.conf, for example:

title Fedora Core (2.6.17-1.2600.fc6xen)
root (hd0,2)
kernel /xen.gz-2.6.17-1.2600.fc6 com1=38400,8n1 sync_console
module /vmlinuz-2.6.17-1.2600.fc6xen ro root=LABEL=/ rhgb quiet console=ttyS0 console=tty pnpacpi=off
module /initrd-2.6.17-1.2600.fc6xen.img

for a 38400-bps serial console on com1 (i.e. /dev/ttyS0 on Linux). The "sync_console" option works around a problem that can cause hangs with asynchronous hypervisor console output, and "pnpacpi=off" works around a problem that breaks input on the serial console. "console=ttyS0 console=tty" means that kernel errors get logged to both the normal VGA console and the serial console. Once that is done, you can install and set up ttywatch (from fedora-extras) to capture the information on a remote host connected by a standard null-modem cable. For example, on the remote host:
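The ttywatch invocation was lost here; based on ttywatch's options, it was presumably something like:

```shell
# Watch the serial port and log it under the name "myhost"
ttywatch --name myhost --port /dev/ttyS0
```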

This will log output from /dev/ttyS0 into the file /var/log/ttywatch/myhost.log.

Paravirt guest serial console access

A para-virtualized guest OS will automatically have a serial console configured and plumbed through to the Domain-0 OS. It can be accessed from the command line using:
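That is, from Domain-0 (the guest name is illustrative):

```shell
# Attach to the paravirt guest's serial console (detach with Ctrl-])
xm console GuestName
```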

Alternatively, the graphical virt-manager program can display the serial console. Simply display the 'console' or 'details' window for the guest & select 'View -> Serial console' from the menu bar.

Fully-virt guest serial console access

A fully-virtualized guest OS will also automatically have a serial console configured, but the guest kernel will not be configured to use it out of the box. To enable the guest console in a Linux fully-virt guest, edit /etc/grub.conf in the guest and add 'console=ttyS0 console=tty0' to the kernel line. This ensures that all kernel messages are sent to both the serial console and the regular graphical console. The serial console can then be accessed in the same way as for paravirt guests:
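That is, again from Domain-0 (guest name illustrative):

```shell
# Attach to the fully-virt guest's serial console
xm console GuestName
```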

Alternatively, the graphical virt-manager program can display the serial console. Simply display the 'console' or 'details' window for the guest & select 'View -> Serial console' from the menu bar.

Accessing data on a guest disk image

There are two tools which can help greatly in accessing data within a guest disk image: lomount and kpartx. Remember never to do this while the guest is up and running, as you could corrupt the filesystem if you try to access it from the guest and dom0 at the same time!

  • lomount
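A typical lomount invocation looks like this (the image path and mount point are illustrative):

```shell
# Mount partition 1 of a guest disk image on the host
lomount -diskimage /xen/guest.img -partition 1 /mnt/guest
```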

lomount only works with small disk images and cannot deal with LVM volumes, so for more complex cases, kpartx (from the device-mapper-multipath RPM) is preferred:

  • kpartx
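The command producing the output below is kpartx in add/verbose mode against the guest's block device:

```shell
# Create device-mapper entries for each partition in the guest's disk
kpartx -av /dev/xen/guest1
```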
add map guest1p1 : 0 208782 linear /dev/xen/guest1 63
add map guest1p2 : 0 16563015 linear /dev/xen/guest1 208845

Note that this only works for block devices, not for images installed on regular files. To use file images, you'll need to set up a loopback device for the file first:
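For example (the loop device number and image path are illustrative):

```shell
# Bind the image file to a loop device, then map its partitions
losetup /dev/loop0 /xen/guest.img
kpartx -av /dev/loop0
```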

add map loop0p1 : 0 208782 linear /dev/loop0 63
add map loop0p2 : 0 12370050 linear /dev/loop0 208845

In this case we have added an image formatted as a default Fedora install, so it has two partitions: one /boot, and one LVM volume containing everything else. They are accessible under /dev/mapper:
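The listing below comes from inspecting the device-mapper directory:

```shell
ls -l /dev/mapper
```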

brw-rw---- 1 root disk 253,  6 Jun  6 10:32 xen-guest1
brw-rw---- 1 root disk 253, 14 Jun  6 11:13 guest1p1
brw-rw---- 1 root disk 253, 15 Jun  6 11:13 guest1p2

To access LVM volumes on the second partition, we'll need to rescan LVM with "vgscan" and activate the volume group on that partition (named "VolGroup00" by default) with "vgchange -ay":
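The commands are named in the text above; as a block (volume group name per the default FC install, output follows below):

```shell
vgscan                   # rescan for volume groups on the new partitions
vgchange -ay VolGroup00  # activate the guest's volume group
lvs                      # list its logical volumes
```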

Reading all physical volumes.  This may take a while...
Found volume group "VolGroup00" using metadata type lvm2
2 logical volume(s) in volume group "VolGroup00" now active
LV        VG         Attr   LSize   Origin Snap%  Move Log Copy%
LogVol00  VolGroup00 -wi-a-   5.06G
LogVol01  VolGroup00 -wi-a- 800.00M
Warning: Always remember to deactivate the logical volumes with "vgchange -an", remove the partitions with "kpartx -d", and (if appropriate) delete the loop device with "losetup -d" after you are finished. There are two reasons: first of all, the default volume group name for a FC install is always the same, so if you end up activating two disk images at the same time you'll end up with two separate LVM volume groups with the same name. LVM will cope as best it can, but you won't be able to distinguish between these two groups on the command line.

And secondly, if you don't deactivate it, then if the guest is started up again, you might end up with the LVM being active in both the guest and the dom0 at the same time, and this may lead to VG or filesystem corruption.

Frequently Asked Questions

  • Q: I am trying to start the xend service and nothing happens, then when I run 'xm list' I get the following:
Error: Error connecting to xend: Connection refused.  Is xend running?

Alternatively, I run 'xend start' manually and get the following error:

ERROR: Could not obtain handle on privileged command interface (2 = No such file or directory)
Traceback (most recent call last):
File "/usr/sbin/xend", line 33, in ?
from xen.xend.server import SrvDaemon
File "/usr/lib/python2.4/site-packages/xen/xend/server/", line 21, in ?
import relocate
File "/usr/lib/python2.4/site-packages/xen/xend/server/", line 26, in ?
from xen.xend import XendDomain
File "/usr/lib/python2.4/site-packages/xen/xend/", line 33, in ?
import XendDomainInfo
File "/usr/lib/python2.4/site-packages/xen/xend/", line 37, in ?
import image
File "/usr/lib/python2.4/site-packages/xen/xend/", line 30, in ?
xc = xen.lowlevel.xc.xc()
RuntimeError: (2, 'No such file or directory')

A: You have rebooted your host into a kernel that is not a xen-hypervisor kernel. Yes I did this myself in testing :)

You either need to select the xen-hypervisor kernel at boot time or set the xen-hypervisor kernel as default in your grub.conf file.

  • Q. When creating a guest the message "Invalid argument" is displayed.

A. This usually indicates that the kernel image you are trying to boot is incompatible with the hypervisor. This will be seen if trying to run a FC5 (non-PAE) kernel on FC6 (which is PAE only), or if trying to run a bare metal kernel.

  • Q. When I do a yum update and get a new kernel, the grub.conf default kernel switches back to the bare-metal kernel instead of the Xen kernel

A. The default kernel RPM can be changed in /etc/sysconfig/kernel. If it is set to 'kernel-xen', then the Xenified kernel will always be set as default option in grub.conf

Getting Help

If the Troubleshooting section above does not help you to solve your problem, check the Red Hat Bugzilla for existing bug reports on Xen in FC6. The product is "Fedora Core", and the component is "kernel" for bugs related to the xen kernel and "xen" for bugs related to the tools. These reports contain useful advice from fellow xen testers and often describe work-arounds.

For general Xen issues and useful information, check the Xen project documentation and mailing list archives.

Finally, discussion of Fedora Xen support issues occurs on the Fedora Xen mailing list.