How To: Running Fedora-ARM under QEMU
QEMU is a well-known emulator that supports ARM platforms and can be used to run the Fedora-ARM distribution. This provides a convenient platform for trying out the distribution, as well as for development and customization.
This howto describes a process for getting the Fedora-ARM distribution running under QEMU. Although we have tested this on Fedora 12, most of the process should work on any other Linux system as well. We assume that you can run commands as root (or using sudo) whenever necessary.
The QEMU system is set up to get its root file system from a local loopback block device or over NFS from the host system (which requires networking between the host system and the QEMU guest). The guest's networking can then be configured to get its IP address using DHCP.
Using QEMU with libvirt
libvirt is a virtualization management framework and toolkit. At the tool level, it provides the virsh shell for command-line VM management as well as the virt-manager GUI tool (plus additional tools).
By using libvirt to manage ARM VMs, you can leverage its capabilities (such as domain autostart, network setup with NAT and DHCP, and console disconnect/reconnect), and manage your ARM and x86 VMs in a consistent manner.
Here is a quick-start guide to setting up ARM QEMU emulation under libvirt management:
Installing and starting the virtualization software
These steps install libvirt and related tools, if not installed already, plus the ARM emulator, and then start the libvirt daemon:
yum groupinstall virtualization
yum install qemu-system-arm
service libvirtd start
Installing the ARM root filesystem and XML
These steps download a 4GB pre-built ext3 root image, ARM kernel, and libvirt XML domain definition, then define a VM to use them:
cd /var/lib/libvirt/images
wget http://ftp.linux.org.uk/pub/linux/arm/fedora/qemu/zImage-versatile-2.6.24-rc7.armv5tel \
  http://cdot.senecac.on.ca/arm/arm1.xml \
  http://cdot.senecac.on.ca/arm/arm1.img.gz
gunzip arm1.img.gz
restorecon *
virsh define arm1.xml
Note that the virtual machine definition cannot be performed using virt-manager at this time (F12) -- use virsh as shown above.
Booting the VM
You can now boot and access the VM using the virt-manager tool (Applications > System Tools > Virtual Machine Manager on the menu), or you can control it from the command line:
virsh start arm1
To view the graphical display without using virt-manager, use the virt-viewer command:
virt-viewer arm1
Or, alternately, run
virsh vncdisplay arm1
and then use the vncviewer program (from the tigervnc package) to view the VM console.
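The display number printed by virsh vncdisplay maps to a TCP port for vncviewer: port 5900 plus the display number. A quick sketch of the mapping (the :0 value is only an example; substitute the real virsh output):

```shell
# Convert a VNC display number (e.g. ":0" from `virsh vncdisplay`) to a TCP port.
DISPLAY_NUM=":0"                    # example value, not necessarily your VM's display
PORT=$((5900 + ${DISPLAY_NUM#:}))   # strip the leading colon, add the VNC base port
echo "$PORT"                        # prints: 5900
```

You would then connect with, for example, vncviewer localhost:5900 (or simply vncviewer :0).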
Using networking in the VM
You can get a DHCP address for the VM using dhclient eth1, or set up a static IP configuration. Once you have IP configured, you can:
- Use ssh instead of the console to access a shell on the VM (faster, and more flexible)
- Use yum to install and remove software
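For the static alternative, here is a minimal sketch of an interface configuration inside the guest. The interface name eth1 and all the addresses below are assumptions (192.168.122.x is libvirt's default NAT subnet); match them to your own network setup:

```
# /etc/sysconfig/network-scripts/ifcfg-eth1 (inside the VM)
# All addresses below are examples for libvirt's default NAT network.
DEVICE=eth1
BOOTPROTO=none
IPADDR=192.168.122.50
NETMASK=255.255.255.0
GATEWAY=192.168.122.1
ONBOOT=yes
```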
Creating additional ARM VMs
For each additional ARM VM you wish to create:
- Make a new copy of the arm1.img file under a different name in /var/lib/libvirt/images.
- Make a copy of the arm1.xml file and edit it, making the following changes:
 - Change the name of the VM.
 - Change the UUID (you can use uuidgen to generate a new one).
 - Change the image filename (in the source tag in the devices section) to point to the new image file you just created.
- Run virsh define nameOfXMLFile to define the new VM from the modified XML file.
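The per-VM edits can be scripted with sed and uuidgen. This is only a sketch: the stub arm1.xml written below stands in for the real definition, and the new VM name arm2 is arbitrary; after generating the new XML you would copy the disk image and run virsh define yourself.

```shell
# Stub definition standing in for the real arm1.xml (illustration only).
cat > arm1.xml <<'EOF'
<domain type='qemu'>
  <name>arm1</name>
  <uuid>00000000-0000-0000-0000-000000000000</uuid>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/arm1.img'/>
    </disk>
  </devices>
</domain>
EOF

NEWNAME=arm2                      # arbitrary new VM name
NEWUUID=$(uuidgen)                # fresh UUID for the clone
sed -e "s|<name>arm1</name>|<name>${NEWNAME}</name>|" \
    -e "s|<uuid>[^<]*</uuid>|<uuid>${NEWUUID}</uuid>|" \
    -e "s|arm1\.img|${NEWNAME}.img|g" \
    arm1.xml > "${NEWNAME}.xml"
# Then: copy the disk image to ${NEWNAME}.img and run `virsh define ${NEWNAME}.xml`.
```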
Using QEMU without libvirt
If you are running Fedora 7/8, you can just install qemu using yum.
yum install qemu
You can skip this section if you are going to use a local loopback device for your root file system. However, that may prevent you from using yum to install new packages on your Fedora-ARM guest.
Networking is set up between the host system and the QEMU guest to allow the guest to get its IP address using DHCP.
The networking setup uses host TAP devices to connect to QEMU. In recent kernels, this requires CAP_NET_ADMIN capability. The host system needs to have TUN/TAP networking enabled (CONFIG_TUN=m or CONFIG_TUN=y). You can verify this using:
grep CONFIG_TUN= /boot/config-`uname -r`
Also make sure that /dev/net/tun exists. If not, you can make it as follows:
mknod /dev/net/tun c 10 200
Now, we need to set up a network bridge interface. Install some utilities to configure an Ethernet bridge:
# yum install bridge-utils
/usr/sbin/brctl addbr br0
/sbin/ifconfig eth0 0.0.0.0 promisc up
/usr/sbin/brctl addif br0 eth0
/sbin/dhclient br0
/sbin/iptables -F FORWARD
Also, create a script qemu-ifup as follows. This will be needed when we boot into QEMU.
#!/bin/sh
/sbin/ifconfig $1 0.0.0.0 promisc up
/usr/sbin/brctl addif br0 $1
Setup Kernel Image
You can either simply use a pre-built kernel image or build your own from source.
Pre-built Kernel Image
You can get one of the following pre-built kernel images for ARM:
- zImage-qemu-versatile-3.0.8-4.fc17.armv5tel (right click -> Save as ...)
The README file documents how these particular zImage-versatile kernels were built.
Build Kernel Image From Source
You will need to have an ARM cross-compiler. If you do not have one, download one from CodeSourcery's web-site, install it, and ensure that it is in your PATH.
export ARCH=arm
export CROSS_COMPILE=arm-none-linux-gnueabi-
You can also use the Fedora cross toolchain that we provide.
Download the Linux kernel (I have tested it with 2.6.21 and 2.6.22) and build it for the ARM Versatile board. But first, you will have to customize the defconfig for it to work correctly.
cp arch/arm/configs/versatile_defconfig .config
make menuconfig

- Enable DHCP support (CONFIG_IP_PNP_DHCP). It is under Networking -> Networking Support -> Networking Options -> TCP/IP Networking -> IP: Kernel Level autoconfiguration.
- Enable Universal TUN/TAP driver support (CONFIG_TUN). It is under Device Drivers -> Network Device Support -> Network Device Support.
- Enable ARM EABI support (CONFIG_AEABI). It is under Kernel Features.
- Enable tmpfs support (CONFIG_TMPFS). It is under File Systems -> Pseudo File Systems.

If you will be booting from a file system image (not NFS), then the following steps should also be taken:

- Enable PCI support (CONFIG_PCI). It is under Bus Support.
- Enable SCSI Device Support. It is under Device Drivers -> SCSI Device Support.
- Enable SCSI Disk Support. It is under Device Drivers -> SCSI Device Support.
- Enable SYM53C8XX Version 2 SCSI Support. It is under Device Drivers -> SCSI Device Support -> SCSI low-level drivers.

Optionally you may enable:

- Device Drivers -> Serial ATA and Parallel ATA drivers -> Marvell SATA support (I believe this supports the Marvell SATA controllers found on Marvell processor based plug computers: Sheeva, OpenRD, GuruPlug, etc.)

Build the kernel:

make all
make zImage

(the kernel image will be located in arch/arm/boot, named zImage)

make modules_install INSTALL_MOD_PATH=$TARGETDIR

($TARGETDIR needs to be an alternate directory. These are the kernel modules you copy to /lib/modules/kernel_version_number on your rootfs.)
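For reference, the options selected in menuconfig above correspond to the following .config symbols. This fragment is a sketch; exact symbol names and dependencies can vary between kernel versions, so verify against your tree:

```
CONFIG_IP_PNP_DHCP=y
CONFIG_TUN=y
CONFIG_AEABI=y
CONFIG_TMPFS=y
# Only needed when booting from a file system image (not NFS):
CONFIG_PCI=y
CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
CONFIG_SCSI_SYM53C8XX_2=y
```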
Setup Root File System
Download the latest root file system tarball.
Root File System On Loopback Device
Create a file to use as a loopback device -- 4GB is a reasonable size.
dd if=/dev/zero of=rootfs-f10-dev bs=1024k count=4096
Create a file system.
mkfs.ext3 rootfs-f10-dev -L arm
or, for a newer rootfs version (e.g. F17):
mkfs.ext3 rootfs-f10-dev -L rootfs
The label or UUID must be the same as the LABEL= or UUID= for / in /etc/fstab inside the root file system, otherwise the read-write remount of / will fail during boot.
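A quick way to convince yourself of this is to label a throwaway image and read the label back with e2label (test.img here is a small stand-in, not your real rootfs image):

```shell
# The label written by mkfs is what LABEL=... in the rootfs /etc/fstab
# resolves against during the read-write remount of /.
dd if=/dev/zero of=test.img bs=1024k count=16 2>/dev/null
mkfs.ext3 -q -F test.img -L arm    # -F: operate on a plain file, not a block device
e2label test.img                   # prints the label: arm
```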
Prepare the root file-system. This assumes that the loopback device is mounted under /mnt/ARM_FS.
mount rootfs-f10-dev /mnt/ARM_FS -o loop
tar -xjf rootfs-f10.tar.bz2 -C /mnt/ARM_FS
mv /mnt/ARM_FS/rootfs-f10/* /mnt/ARM_FS
rm -rf /mnt/ARM_FS/rootfs-f10
(copy your kernel modules from $TARGETDIR from the kernel build to /mnt/ARM_FS/lib/modules/kernel_version_number)
umount /mnt/ARM_FS
Root File System Over NFS
Download the latest root filesystem tarball from http://ftp.linux.org.uk/pub/linux/arm/fedora/rootfs/ and untar it.
This assumes that you untarred the root file system in /mnt/ARM_FS. We need to export it through NFS: add an entry for /mnt/ARM_FS to your /etc/exports.
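A typical export entry (the client subnet and options here are only an example; adjust them to your network) looks like:

```
/mnt/ARM_FS 192.168.0.0/24(rw,no_root_squash,sync)
```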
Now, restart the NFS service.
/sbin/service nfs restart
Boot into QEMU
Now you are ready to boot into QEMU. Replace <host-ip> with the IP address of the host machine.
qemu-system-arm -M versatilepb -kernel zImage-versatile \
  -append "root=/dev/nfs nfsroot=<host-ip>:/mnt/ARM_FS rw ip=dhcp" \
  -net nic,vlan=0 -net tap,vlan=0,ifname=tap0,script=./qemu-ifup
If you're using the raw image instead of NFS, try this instead:
qemu-system-arm -M versatilepb -kernel zImage-versatile -hdc rootfs-f10-dev \
  -append "root=0800" \
  -net nic,vlan=0 -net tap,vlan=0,ifname=tap0,script=./qemu-ifup