
{{autolang|base=yes}}


This page deals with using Fedora to host virtual guests. For information on the different virtualization technologies available in Fedora, see the [[Tools/Virtualization |dedicated page]].


== Using virtualization on Fedora ==


Fedora uses the libvirt family of tools as its virtualization solution. By default, libvirt on Fedora uses QEMU to run guest instances.


For information on other virtualization platforms, refer to http://virt.kernelnewbies.org/TechComparison.

QEMU can emulate a host machine in software or, given a CPU with hardware virtualization support (see below), can use [http://www.linux-kvm.org KVM] to provide fast full virtualization.


Other virtualization products and packages are available but are not covered by this guide.

== Installing and configuring Fedora for virtualized guests ==


This section covers setting up libvirt on your system. After the successful completion of this section you will be able to create virtualized guest operating systems.


=== System requirements ===


The common system requirements for virtualization on Fedora are:
* At least 600MB of hard disk storage per guest. A minimal command-line Fedora system requires 600MB of storage. Standard Fedora desktop guests require at least 3GB of space.
* At least 256MB of RAM per guest, plus 256MB for the base OS. At least 756MB is recommended for each guest running a modern operating system. A good rule of thumb is to consider how much memory the operating system normally requires and allocate that much to the virtualized guest. A quick way to check the host's free disk space and memory is sketched below.
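For example (a sketch; <code>/var/lib/libvirt/images</code> is where libvirt normally stores guest disk images, so check the filesystem that will hold them):
<pre>
# Free disk space on the filesystem that will hold guest images
df -h /var
# Available memory on the host
free -m
</pre>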


KVM requires a CPU with virtualization extensions, found on most recent consumer CPUs. These extensions are called Intel VT or AMD-V. To check whether you have the proper CPU support, run the command:


<pre>$ egrep '^flags.*(vmx|svm)' /proc/cpuinfo </pre>

If nothing is printed, your system does not support the relevant extensions. You can still use QEMU/KVM, but the emulator will fall back to software virtualization, which is far, far slower.
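Once the virtualization packages described below are installed, the host can also be checked with libvirt's own validator (a sketch; <code>virt-host-validate</code> ships with libvirt on current Fedora releases):
<pre>
# The KVM device node should exist once the kvm modules are loaded
ls -l /dev/kvm
# Ask libvirt to validate the host for virtualization
virt-host-validate
</pre>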


=== Installing the virtualization packages ===


When installing Fedora, the virtualization packages can be installed by selecting '''Virtualization''' in the Base Group in the installer. (This may [http://docs.fedoraproject.org/en-US/Fedora/13/html/Installation_Guide/s1-pkgselection-x86.html no longer apply to your installation method] though).


For existing Fedora installations, QEMU, KVM, and other virtualization tools can be installed by running the following command, which installs the virtualization group:




==== Fedora 22 to current ====
On Fedora 21 and earlier, replace "dnf" with "yum"; yum is a deprecated package manager and has been replaced by DNF since Fedora 22.
<pre>
su -c "dnf install @virtualization"
</pre>


This installs the group's Mandatory and Default packages, listed below:
<pre>
$ dnf groupinfo virtualization


Group: Virtualisation
Group-Id: virtualization
Description: These packages provide a virtualisation environment.
Mandatory Packages:
  =virt-install
Default Packages:
  =libvirt-daemon-config-network
  =libvirt-daemon-kvm
  =qemu-kvm
  =virt-manager
  =virt-viewer
Optional Packages:
  guestfs-browser
  libguestfs-tools
  python-libguestfs
  virt-top
</pre>


To install the Mandatory, Default, and Optional packages in the group, run:
<pre>
su -c "dnf group install with-optional virtualization"
</pre>


To start the service:
<pre>
su -c "systemctl start libvirtd"
</pre>


To start the service on boot:
<pre>
su -c "systemctl enable libvirtd"
</pre>
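To confirm that the daemon is active and that libvirt has created its default virtual network (a quick check; the network is normally named <code>default</code>):
<pre>
su -c "systemctl is-active libvirtd"
su -c "virsh net-list --all"
</pre>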


Verify that the kvm kernel modules were properly loaded:


<pre>
$ lsmod | grep kvm
kvm_amd                55563  0
kvm                  419458  1 kvm_amd
</pre>


If that command did not list kvm_intel or kvm_amd, KVM is not properly configured. See [[How_to_debug_Virtualization_problems#Ensuring_system_is_KVM_capable| Ensuring system is KVM capable]] for troubleshooting tips.
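If the modules are simply not loaded yet, they can be loaded by hand; pick the module that matches your CPU vendor:
<pre>
su -c "modprobe kvm_intel"   # Intel CPUs
su -c "modprobe kvm_amd"     # AMD CPUs
</pre>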


=== Networking Support ===


By default, libvirt creates a private network for your guests on the host machine. This private network uses a 192.168.x.x subnet and is not reachable directly from the network the host machine is on, but virtual guests can use the host machine as a gateway to connect out. If you need to provide services on your guests that are reachable from other machines on the host network, you can use iptables DNAT rules to forward specific ports in, or you can set up a bridged network.
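As an illustration of the DNAT approach (a sketch only; the guest address and port numbers below are hypothetical, and the default libvirt subnet is usually 192.168.122.0/24), the following forwards connections to host port 2222 to SSH on a guest:
<pre>
su -c "iptables -t nat -A PREROUTING -p tcp --dport 2222 -j DNAT --to-destination 192.168.122.100:22"
su -c "iptables -I FORWARD -d 192.168.122.100 -p tcp --dport 22 -j ACCEPT"
</pre>
Such rules are not persistent across reboots unless saved with your firewall tooling.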


See the [http://wiki.libvirt.org/page/Networking libvirt networking setup page] for more information on how to set up a bridged network.


=== Creating a Fedora guest ===


The installation of Fedora guests using anaconda is supported. The installation can be started on the command line via the <code>virt-install</code> program or in the GUI program <code>virt-manager</code>.  


==== Creating a guest with virt-install ====


<code>virt-install</code> is a command-line tool for creating virtualized guests. Refer to http://virt-tools.org/learning/install-with-command-line/ to learn how to use this tool. Execute <code>virt-install --help</code> for command line help.


<code>virt-install</code> can use kickstart files, for example
<code>virt-install -x ks=kickstart-file-name.ks</code>.
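A fuller invocation might look like the following (a sketch; the guest name, memory size, disk size, and install tree URL are placeholders to adapt, and option spellings can vary slightly between virt-install versions):
<pre>
su -c "virt-install --name fedora-guest --memory 2048 --vcpus 2 --disk size=8 --location https://download.fedoraproject.org/pub/fedora/linux/releases/24/Server/x86_64/os/ --graphics vnc"
</pre>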


If graphics were enabled, a VNC window will open and present the graphical installer. If graphics were not enabled, a text installer will appear. Proceed with the Fedora installation.


==== Creating a guest with virt-manager ====


Start the GUI Virtual Machine Manager by selecting it from the "Applications-->System Tools" menu, or by running the following command:
<pre>
su -c "virt-manager"
</pre>


If you encounter an error along the lines of "Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash", try running <code>virt-manager</code> as a regular user (without the <code>su -c</code>). The GUI will prompt for the root password.


# Open a connection to a hypervisor by choosing File-->Add connection...
# Choose "qemu" for KVM, or "Xen" for Xen.
# Choose "local" or select a method to connect to a remote hypervisor
# After a connection is opened, click the new icon next to the hypervisor, or right click on the active hypervisor and select "New" (Note - the new icon is going to be improved to make it easier to see)
# A wizard will present the same questions as appear with the virt-install command-line utility (see descriptions above). The wizard assumes that a graphical installation is desired and does not prompt for this option.
# On the last page of the wizard there is a "Finish" button. When this is clicked, the guest OS is provisioned. After a few moments a VNC window should appear. Proceed with the installation as normal.


=== Remote management ===

The following remote management options are available:
su -c "yum install koan"
* (easiest) If using non-root users via SSH, then setup instructions are at: http://wiki.libvirt.org/page/SSHSetup
koan --virt --profile=myxenprofile --server=hostname-of-cobbler-server
* If using root for access via SSH, then create SSH keys for root, and use <code>ssh-agent</code> and <code>ssh-add</code> before launching <code>virt-manager</code>.
</pre>
* To use TLS, set up a local certificate authority and issue x509 certs to all servers and clients. For information on configuring this option, refer to http://wiki.libvirt.org/page/TLSSetup.
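For example, once SSH access is in place, both <code>virt-manager</code> and <code>virsh</code> can point at a remote libvirtd instance through a <code>qemu+ssh://</code> URI (the host name below is only a placeholder):
<pre>
virt-manager -c qemu+ssh://root@remote.example.com/system
virsh -c qemu+ssh://root@remote.example.com/system list --all
</pre>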


=== Guest system administration ===


When the installation of the guest operating system is complete, it can be managed using the GUI <code>virt-manager</code> program or on the command line using <code>virsh</code>.


==== Managing guests with virt-manager ====


Start the Virtual Machine Manager from the "Applications-->System Tools" menu, or execute:
<pre>
su -c "virt-manager"
</pre>


If you are not root, you will be prompted to enter the root password. Choose <code>Run unprivileged</code> to operate in a read-only, non-root mode.


* Choose "Local Xen Host" and click "Connect" in the "Open Connection" dialog window.
* Choose the host you wish to manage and click "Connect" in the "Open Connection" dialog window.
* The list of virtual machines is displayed in the main window. The first machine is called "Domain 0"; this is the host computer.
* The list of virtual machines is displayed in the main window. Guests that are running will display a ">" icon. Guests that are not running will be greyed out.
* If a machine is not listed, it is probably not running. To start up a machine select "File-->Restore a saved machine..." and select the file that serves as the guest's disk.
* To manage a particular guest, double click on it, or right click and select "Open".
* The display lists the status, CPU and memory usage for each machine. Additional statistics can be selected under the "View" menu.
* A new window for the guest will open that will allow you to use its console, see information about its virtual hardware and start/stop/pause it.
* Double click the name of a machine to open the virtual console.
* From the virtual console, select "View-->Details" to access the machine's properties and change its hardware configuration
* To access the serial console (if there is a problem with the graphical console) select "View-->Serial Console"


For further information about <code>virt-manager</code> consult the [http://virt-manager.et.redhat.com/ project website]

Bugs in the <code>virt-manager</code> tool should be reported in [http://bugzilla.redhat.com BugZilla]  against the 'virt-manager' component


==== Managing guests with virsh ====


Guests can be managed on the command line with the <code>virsh</code> utility, which is built around the libvirt management API:


* <code>virsh</code> has a stable set of commands whose syntax and semantics are preserved across updates to the underlying virtualization platform.
* <code>virsh</code> can be used as an unprivileged user for read-only operations (e.g. listing domains, listing domain statistics).
* <code>virsh</code> can manage domains running under Xen, QEMU/KVM, ESX, or other backends with no perceptible difference to the user.


{{Admon/note | A valid URI may be passed to <code>virsh</code> with "-c" to connect to a remote libvirtd instance. For details, see http://libvirt.org/uri.html}}
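For example, <code>qemu:///system</code> addresses the system-wide libvirtd instance, while <code>qemu:///session</code> addresses a per-user instance; both of the following are valid local connections:
<pre>
virsh -c qemu:///system list --all
virsh -c qemu:///session list --all
</pre>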


To start a virtual machine:


<pre>
su -c "virsh create <name of virtual machine>"
</pre>


To list the virtual machines currently running:


<pre>
su -c "virsh list"
</pre>

To list all virtual machines, running or not:
<pre>
su -c "virsh list --all"
</pre>


To gracefully power off a guest:
<pre>
su -c "virsh shutdown <virtual machine (name | id | uuid)>"
</pre>

To forcibly power off a guest:
<pre>
su -c "virsh destroy <virtual machine (name | id | uuid)>"
</pre>


To save a snapshot of the machine to a file:
<pre>
su -c "virsh save <virtual machine (name | id | uuid)> <filename>"
</pre>


To restore a previously saved snapshot:
<pre>
su -c "virsh restore <filename>"
</pre>


To export the configuration file of a virtual machine:
<pre>
su -c "virsh dumpxml <virtual machine (name | id | uuid)>"
</pre>
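The exported XML can be redirected to a file, edited, and registered again with <code>virsh define</code>; the guest name and path below are placeholders:
<pre>
su -c "virsh dumpxml myguest > /tmp/myguest.xml"
su -c "virsh define /tmp/myguest.xml"
</pre>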


For a complete list of commands available for use with <code>virsh</code>, run <code>virsh help</code>, or consult the manual page: <code>man 1 virsh</code>.

Bugs in the <code>virsh</code> tool should be reported in [http://bugzilla.redhat.com BugZilla]  against the 'libvirt' component.


== Other virtualization options ==

=== QEMU/KVM without Libvirt ===
QEMU/KVM can be invoked directly without libvirt; however, you won't be able to use tools such as virt-manager, virt-install, or virsh.
Plain QEMU (without KVM) can also emulate other processor architectures such as ARM or PowerPC. See [[How to use qemu]].
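A minimal direct invocation looks something like the following (a sketch; the image and ISO file names are placeholders, and depending on the Fedora release the binary may be <code>qemu-kvm</code> rather than <code>qemu-system-x86_64</code>):
<pre>
qemu-img create -f qcow2 guest.qcow2 8G
qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 -hda guest.qcow2 -cdrom Fedora-Workstation.iso -boot d
</pre>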


=== Xen ===
Fedora can run as a Xen guest OS and can also be used as a Xen host (the latter from Fedora 16 onward; for using an earlier version of Fedora as a Xen host, check out the experimental repo available at http://myoung.fedorapeople.org/dom0). For a guide on how to install and set up a Fedora Xen host, see the [http://wiki.xen.org/wiki/Fedora_Host_Installation Fedora Host Installation] page on the Xen Project wiki.


=== OpenStack ===
[[OpenStack]] consists of a number of services for running IaaS clouds. They are the Object Store (Swift), Compute (Nova) and Image (Glance) services. It is a [[Features/OpenStack |Fedora 16 feature]].


=== OpenNebula ===
[[Features/OpenNebula |OpenNebula]] is an Open Source Toolkit for Data Center Virtualization.


=== oVirt ===
The [[Features/oVirt |oVirt project]] is an open virtualization project providing a feature-rich, end-to-end server virtualization management system with advanced capabilities for hosts and guests, including high availability, live migration, storage management, a system scheduler, and more.


== Troubleshooting, bug reporting, and known issues ==


For a list of known unresolved issues, as well as troubleshooting tips, please see [[How_to_debug_Virtualization_problems|How to debug virtualization problems]].


[[Category:Documentation]]
[[Category:Virtualization]]
