From Fedora Project Wiki

Virtualization in Fedora

Fedora supports virtualization with the KVM and Xen platforms. For an overview of other virtualization technologies available for Linux in general, and their features and performance, refer to the TechComparison site.

  • Xen supports para-virtualized guests (VMs) as well as fully virtualized guests with para-virtualized drivers. Para-virtualization is faster than full virtualization, but it only works with Linux operating systems that have the Xen extensions in the kernel. Fully virtualized Xen is also slower than the equivalent KVM setup.
    For general information, visit the Xen project site; for the current status of Xen support in Fedora, consult the XenPvopsDom0 release note.

Note: Fedora uses Xen version 3.0.x, released in December 2005, which is incompatible with guests created using Xen 2.0.x.

  • KVM offers fast, full virtualization and requires dedicated hardware: a processor able to handle virtualization instructions. KVM runs on Intel x86 or AMD processors with virtualization extensions. Without these extensions, QEMU software virtualization is used instead.
    For general information, visit the KVM project site.

Preparing the system for virtualization

This section describes how to set up Xen, KVM, or both on your system. After successfully completing these steps, you will be able to create guest operating systems. :-)

Hardware requirements

Hard disk space and RAM

The minimum storage and memory requirements for virtualization in Fedora are:

  • At least 600MB of hard disk space per guest. A minimal text-mode Fedora system requires 600MB of space. A standard Fedora desktop guest requires at least 3GB of hard disk space.
  • At least 256MB of RAM per guest, plus 256MB for the base operating system (OS). About 756MB is recommended for each guest running a modern OS. A good rule of thumb is to take as much memory as the OS normally requires and allocate that amount to the virtualized guest.

Processor requirements for para-virtualized guests

This applies only to the Xen platform; at the moment, KVM does not support para-virtualization.
An x86_64 processor, an Intel Itanium, or another x86 processor with PAE extensions is required. Many older laptops (particularly those based on the Pentium Mobile / Centrino) do not have PAE support. To determine whether a CPU has PAE extensions, run the following in a terminal:

$ grep pae /proc/cpuinfo
flags           : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 mmx fxsr sse syscall mmxext 3dnowext 
                  3dnow up ts

The output above identifies a CPU with PAE extensions. If the command returns nothing, the CPU does not support para-virtualization.

Note: Fedora versions prior to 10 require the kernel-xen package.
For the current status of Xen support in Fedora, consult the XenPvopsDom0 release note for Fedora 13.

Processor requirements for fully virtualized guests

Full virtualization with Xen or KVM requires a CPU with virtualization extensions, that is, Intel VT or AMD-V. To check whether an Intel CPU has Intel VT support (the vmx flag), run, as above:

$ grep vmx /proc/cpuinfo
flags           : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 
                  ss ht tm syscall nx lm  constant_tsc pni monitor ds_cpl vmx est tm2 cx16 xtpr lahf_lm  

On some Intel systems (usually laptops) the Intel VT extensions are disabled in the BIOS. Enter the BIOS and enable Intel VT or Vanderpool Technology, usually found among the CPU options or in the Chipset menus.

To check whether an AMD CPU has AMD-V support (the svm flag):

$ grep svm /proc/cpuinfo
flags           : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall
                  nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow pni cx16 lahf_lm cmp_legacy svm cr8_legacy

VIA Nano processors use the vmx instruction set.

Full virtualization can still be obtained on architectures without this hardware support by using QEMU software emulation. Software virtualization, however, is much slower than the hardware-assisted virtualization provided by the Intel VT or AMD-V extensions. QEMU can also emulate other CPU architectures, such as ARM or PowerPC.
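The two flag checks above can be combined into one sketch that reports whether any hardware virtualization support is present:

```shell
# Report whether the CPU advertises hardware virtualization support,
# by checking for the Intel VT (vmx) and AMD-V (svm) flags described above.
if grep -q -E 'vmx|svm' /proc/cpuinfo; then
    echo "hardware virtualization extensions available"
else
    echo "no hardware extensions found; QEMU will fall back to software emulation"
fi
```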

Installing the required packages

To install the virtualization packages, either select the Virtualization group under Package Collections in the graphical package manager, or run from a terminal:

su -c "yum groupinstall 'Virtualization'"

The installation process provides QEMU, KVM, and other virtualization tools, in particular:

  • qemu-kvm
  • python-virtinst
  • qemu
  • virt-manager
  • virt-viewer
  • and their dependencies.

Optional packages are:

  • gnome-applet-vm
  • virt-top

Introduction to virtualization in Fedora

Fedora supports multiple virtualization platforms. Different platforms require slightly different methods.

When using KVM, to display all domains on the local system the command is virsh -c qemu:///system list. When using Xen, the same command is virsh -c xen:///system list. Be aware of this subtle variation.

To verify that virtualization is enabled on the system, run the following command, where <URI> is a valid URI that libvirt can recognize. For more details on URIs, see http://libvirt.org/uri.html.

$ su -c "virsh -c <URI> list"
Name                              ID Mem(MiB) VCPUs State  Time(s)
Domain-0                           0      610     1 r----- 12492.1

The above output indicates that there is an active hypervisor. If virtualization is not enabled an error similar to the following appears:

$ su -c "virsh -c <URI> list"
libvir: error : operation failed: xenProxyOpen
error: failed to connect to the hypervisor
error: no valid connection

If the above error appears, make sure that:

  • For Xen, ensure xend is running.
  • For KVM, ensure libvirtd is running.
  • For either, ensure the URI is properly specified (see http://libvirt.org/uri.html for details).
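The first two checks can be sketched in shell (an assumption: xend and libvirtd run under those process names, as they do on Fedora of this era):

```shell
# Check whether the xend and libvirtd daemons are running.
for daemon in xend libvirtd; do
    if pgrep -x "$daemon" >/dev/null; then
        echo "$daemon is running"
    else
        echo "$daemon is not running; start it with: su -c \"service $daemon start\""
    fi
done
```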


Note that for the default setup, networking for the guest OS (DomU) is bridged. This means that DomU gets an IP address on the same network as Dom0. If a DHCP server provides addresses, it needs to be configured to give addresses to the guests. Another networking type can be selected by editing /etc/xen/xend-config.sxp.

Creating a Fedora guest

The installation of Fedora guests using anaconda is supported. The installation can be started on the command line via the virt-install program or in the GUI program virt-manager. You will be prompted for the type of virtualization (that is, KVM or Xen, and para-virtualization or full virtualization) during the guest creation process.

Creating a Fedora guest with virt-install

virt-install is a command line based tool for creating virtualized guests. To start the interactive install process, run the virt-install command:

su -c "/usr/sbin/virt-install"

The following questions for the new guest will be presented.

  1. What is the name of your virtual machine? This is the label that will identify the guest OS. This label is used with virsh commands and virt-manager (Virtual Machine Manager).
  2. How much RAM should be allocated (in megabytes)? This is the amount of RAM to allocate to the guest instance in megabytes (e.g. 256). Note that installation with less than 256 megabytes is not recommended.
  3. What would you like to use as the disk (path)? The local path and file name of the file to serve as the disk image for the guest (e.g. /home/joe/xenbox1). This will be exported as a full disk to your guest.
  4. How large would you like the disk to be (in gigabytes)? The size of the virtual disk for the guest (only asked if the file specified above does not already exist). 4.0 gigabytes is a reasonable size for a "default" install.
  5. Would you like to enable graphics support? (yes or no) Should the graphical installer be used?
  6. What is the install location? This is the path to a Fedora installation tree in the format used by anaconda. NFS, FTP, and HTTP locations are all supported.

These options can also be passed on the command line; execute virt-install --help for details.

virt-install can use kickstart files, for example virt-install -x ks=kickstart-file-name.ks.
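Putting the command-line options and a kickstart file together, a one-shot invocation might look like the following sketch. It is wrapped in echo so it is safe to paste; every name, path, and URL in it is a placeholder, not a real resource:

```shell
# Hypothetical non-interactive invocation; drop the echo and run the
# command via su -c to actually create the guest.
echo virt-install --name demo-guest --ram 256 \
    --file /var/lib/libvirt/images/demo-guest.img --file-size 4 \
    --location http://example.com/fedora/tree \
    -x ks=http://example.com/ks/demo-guest.ks
```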

If graphics were enabled, a VNC window will open and present the graphical installer. If graphics were not enabled, a text installer will appear. Proceed with the Fedora installation.

Creating a Fedora guest with virt-manager

Start the GUI Virtual Machine Manager by selecting it from the "Applications-->System Tools" menu, or by running the following command:

su -c "virt-manager"

Enter the root password when prompted.

  1. Open a connection to a hypervisor by choosing File-->Open connection...
  2. Choose "qemu" for KVM, or "Xen" for Xen.
  3. Choose "local" or select a method to connect to a remote hypervisor
  4. After a connection is opened, click the new icon next to the hypervisor, or right click on the active hypervisor and select "New" (Note - the new icon is going to be improved to make it easier to see)
  5. A wizard will present the same questions as appear with the virt-install command-line utility (see descriptions above). The wizard assumes that a graphical installation is desired and does not prompt for this option.
  6. On the last page of the wizard there is a "Finish" button. When this is clicked, the guest OS is provisioned. After a few moments a VNC window should appear. Proceed with the installation as normal.

Remote management

The following remote management options are available:

  • Create SSH keys for root, and use ssh-agent and ssh-add before launching virt-manager.
  • Set up a local certificate authority and issue x509 certs to all servers and clients. For information on configuring this option, refer to http://libvirt.org/remote.html.

Guest system administration

When the installation of the guest operating system is complete, it can be managed using the GUI virt-manager program or on the command line using virsh.

Managing guests with virt-manager

Start the Virtual Machine Manager. Virtual Machine Manager is in the "Applications-->System Tools" menu, or execute:

su -c "virt-manager"

If you are not root, you will be prompted to enter the root password. Choose Run unprivileged to operate in a read-only, non-root mode.

  • Choose "Local Xen Host" and click "Connect" in the "Open Connection" dialog window.
  • The list of virtual machines is displayed in the main window. The first machine is called "Domain 0"; this is the host computer.
  • If a machine is not listed, it is probably not running. To start up a machine select "File-->Restore a saved machine..." and select the file containing the saved machine state.
  • The display lists the status, CPU and memory usage for each machine. Additional statistics can be selected under the "View" menu.
  • Double click the name of a machine to open the virtual console.
  • From the virtual console, select "View-->Details" to access the machine's properties and change its hardware configuration.
  • To access the serial console (if there is a problem with the graphical console) select "View-->Serial Console".

For further information about virt-manager, consult the project website.

Bugs in the virt-manager tool should be reported in Bugzilla against the 'virt-manager' component.

Managing guests with virsh

The virsh command is a safe alternative to the xm command for managing guests on the command line. Built around the libvirt management API, virsh provides error checking and a number of other advantages over the traditional Xen xm tool:

  • virsh has a stable set of commands whose syntax and semantics are preserved across updates to the underlying virtualization platform.
  • virsh can be used as an unprivileged user for read-only operations (e.g. listing domains, listing domain statistics).
  • virsh can manage domains running under Xen or KVM with no perceptible difference to the user.


A valid URI must be passed to virsh. For details, see http://libvirt.org/uri.html

To start a virtual machine:

su -c "virsh -c <URI> create <name of virtual machine>"

To list the virtual machines currently running:

su -c "virsh -c <URI> list"

To gracefully power off a guest:

su -c "virsh -c <URI> shutdown <virtual machine (name | id | uuid)>"

To save a snapshot of the machine to a file:

su -c "virsh -c <URI> save <virtual machine (name | id | uuid)> <filename>"

To restore a previously saved snapshot:

su -c "virsh -c <URI> restore <filename>"

To export the configuration file of a virtual machine:

su -c "virsh -c <URI> dumpxml <virtual machine (name | id | uuid)>"

For a complete list of commands available for use with virsh:

su -c "virsh help"

Or consult the manual page: man 1 virsh
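As a scripting sketch, the output of virsh list can be post-processed with standard tools; for example, the following hypothetical helper extracts just the domain names, assuming the two-line table header shown earlier:

```shell
# Print just the domain names from `virsh list` output,
# skipping the two header lines of the table.
list_domain_names() {
    awk 'NR > 2 && NF { print $2 }'
}
# usage: su -c "virsh -c <URI> list" | list_domain_names
```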

Bugs in the virsh tool should be reported in Bugzilla against the 'libvirt' component.

Managing guests with qemu-kvm

KVM virtual machines can also be managed on the command line using the qemu-kvm command. See man qemu for more details.

Troubleshooting virtualization

SELinux

The SELinux policy in Fedora has the necessary rules to allow the use of virtualization. The main caveat to be aware of is that any file backed disk images need to be in the directory /var/lib/libvirt/images. This applies both to regular disk images, and ISO images. Block device backed disks are already labelled correctly to allow them to pass SELinux checks.

Beginning with Fedora 11, virtual machines under SELinux are isolated from each other with sVirt.

Log files

The graphical interface, virt-manager, used to create and manage virtual machines, logs to $HOME/.virt-manager/virt-manager.log.

The virt-install tool, used to create virtual machines, logs to $HOME/.virtinst/virt-install.log.

Logging from virt-manager and virt-install may be increased by setting the environment variable LIBVIRT_DEBUG=1. See http://libvirt.org/logging.html for details.
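For example, to launch virt-manager with verbose client-side logging enabled (a sketch; the variable is honoured by the libvirt client library, as described at the link above):

```shell
# Turn on verbose libvirt client logging for this shell session; tools
# launched from the same shell inherit the variable.
export LIBVIRT_DEBUG=1
echo "LIBVIRT_DEBUG is now $LIBVIRT_DEBUG"
# then run, e.g.:  su -c "virt-manager"
```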

All QEMU command lines executed by libvirt are logged to /var/log/libvirt/qemu/$DOMAIN.log where $DOMAIN is the name of the guest.

The libvirtd daemon is responsible for handling connections from tools such as virsh and virt-manager. The level and type of logging produced by libvirtd may be modified in /etc/libvirt/libvirtd.conf.

There are two log files stored on the host system to assist with debugging Xen-related problems. The file /var/log/xen/xend.log holds the same information reported by the 'xm log' command.

The second file, /var/log/xen/xend-debug.log usually contains much more detailed information.

When reporting errors, always include the output from both /var/log/xen/xend.log and /var/log/xen/xend-debug.log.

When starting fully virtualized domains (i.e. an unmodified guest OS), there are also logs in /var/log/xen/qemu-dm*.log which can contain useful information.

Xen hypervisor logs can be seen by running the 'xm dmesg' command.

Serial console access for troubleshooting and management

Serial console access is useful for debugging kernel crashes and for remote management. Accessing the serial consoles of Xen kernels or virtualized guests is slightly different from the normal procedure.

Host serial console access

If the Xen kernel itself has died and the hypervisor has generated an error, there is no way to record the error persistently on the local host.  Serial console lets you capture it on a remote host.

The Xen host must be set up for serial console output, and a remote host must exist to capture it. For the console output, set the appropriate options in /etc/grub.conf:

title Fedora
    root (hd0,1)
    kernel /xen.gz-current.running.version com1=38400,8n1 sync_console
    module /vmlinuz-current.running.version ro root=LABEL=/ rhgb quiet console=ttyS0 console=tty pnpacpi=off
    module /initrd-current.running.version

for a 38400-bps serial console on com1 (i.e. /dev/ttyS0 on Linux). The "sync_console" option works around a problem that can cause hangs with asynchronous hypervisor console output, and "pnpacpi=off" works around a problem that breaks input on the serial console. "console=ttyS0 console=tty" means that kernel errors are logged both to the normal VGA console and to the serial console. Once that is done, install and set up ttywatch to capture the information on a remote host connected by a standard null-modem cable. For example, on the remote host:

su -c "ttywatch --name myhost  --port /dev/ttyS0"

This logs output from /dev/ttyS0 to the file /var/log/ttywatch/myhost.log.

Para-virtualized guest serial console access

Para-virtualized guest OSes automatically have a serial console configured and plumbed through to the Domain-0 OS. It can be accessed from the command line using:

su -c "virsh console <domain name>"

Alternatively, the graphical virt-manager program can display the serial console. Simply display the 'console' or 'details' window for the guest and select 'View -> Serial console' from the menu bar.

Fully virtualized guest serial console access

Fully virtualized guest OSes automatically have a serial console configured, but the guest kernel is not configured to use it out of the box. To enable the guest console in a fully virtualized Linux guest, edit /etc/grub.conf in the guest and add 'console=ttyS0 console=tty0'. This ensures that all kernel messages are sent to both the serial console and the regular graphical console. The serial console can then be accessed in the same way as for para-virtualized guests:

su -c "virsh console <domain name>"

Alternatively, the graphical virt-manager program can display the serial console. Simply display the 'console' or 'details' window for the guest and select 'View -> Serial console' from the menu bar.
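The /etc/grub.conf edit described above might look like the following inside the guest (a sketch, with version strings elided as in the host example):

```
title Fedora
    root (hd0,0)
    kernel /vmlinuz-current.running.version ro root=LABEL=/ console=ttyS0 console=tty0
    initrd /initrd-current.running.version
```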

Accessing data on guest disk images

There are two tools which can help greatly in accessing data within a guest disk image: lomount and kpartx.

Remember never to do this while the guest is up and running, as it could corrupt the filesystem.

Note: you can also try the experimental libguestfs tools.

  • lomount

su -c "lomount -t ext3 -diskimage /xen/images/fc5-file.img -partition 1 /mnt/boot"

lomount only works with small disk images and cannot deal with LVM volumes, so for more complex cases, kpartx (from the device-mapper-multipath RPM) is preferred:

  • kpartx
su -c "yum install device-mapper-multipath"
su -c "kpartx -av /dev/xen/guest1"
add map guest1p1 : 0 208782 linear /dev/xen/guest1 63
add map guest1p2 : 0 16563015 linear /dev/xen/guest1 208845

Note that this only works for block devices, not for images installed on regular files. To use file images, set up a loopback device for the file first:

su -c "losetup -f"
/dev/loop0
su -c "losetup /dev/loop0 /xen/images/fc5-file.img"
su -c "kpartx -av /dev/loop0"
add map loop0p1 : 0 208782 linear /dev/loop0 63
add map loop0p2 : 0 12370050 linear /dev/loop0 208845

In this case we have added an image formatted as a default Fedora install, so it has two partitions: one /boot, and one LVM volume containing everything else. They are accessible under /dev/mapper:

su -c "ls -l /dev/mapper/ | grep guest1"
brw-rw---- 1 root disk 253,  6 Jun  6 10:32 xen-guest1
brw-rw---- 1 root disk 253, 14 Jun  6 11:13 guest1p1
brw-rw---- 1 root disk 253, 15 Jun  6 11:13 guest1p2
su -c "mount /dev/mapper/guest1p1 /mnt/boot/"

To access LVM volumes on the second partition, rescan LVM with vgscan and activate the volume group on that partition (named "VolGroup00" by default) with vgchange -ay:

su -c "kpartx -a /dev/xen/guest1"
su -c "vgscan"
Reading all physical volumes.  This may take a while...
Found volume group "VolGroup00" using metadata type lvm2
su -c "vgchange -ay VolGroup00"
2 logical volume(s) in volume group "VolGroup00" now active
su -c "lvs"
LV        VG         Attr   LSize   Origin Snap%  Move Log Copy%
LogVol00  VolGroup00 -wi-a-   5.06G
LogVol01  VolGroup00 -wi-a- 800.00M
su -c "mount /dev/VolGroup00/LogVol00 /mnt/"
...
su -c "umount /mnt"
su -c "vgchange -an VolGroup00"
su -c "kpartx -d /dev/xen/guest1"

Note: Always deactivate the logical volumes with "vgchange -an", remove the partitions with "kpartx -d", and (if appropriate) delete the loop device with "losetup -d" after performing the above steps. Because the default volume group name for a Fedora install is always the same, it is important to avoid activating two volume groups of the same name at the same time. LVM will cope as best it can, but it is not possible to distinguish between the two groups on the command line. In addition, if the volume group is active on the host and in the guest at the same time, filesystem corruption can result.

Getting help

If the Troubleshooting section above does not help you to solve your problem, check the list of existing virtualization bugs, and search the archives of the mailing lists in the resources section. If you believe your problem is a previously undiscovered bug, please report it to Bugzilla.

Resources

  • Fedora fedora-virt mailing list (general virtualization discussion including KVM and QEMU)
  • Fedora fedora-xen mailing list
  • Xensource xen-users mailing list
  • Red Hat et-mgmt-tools mailing list
  • Red Hat libvir-list mailing list

References

Previous Fedora Virtualization Guides:

Fedora7VirtQuickStart

Fedora8VirtQuickStart