
= How to run Fedora-ARM under QEMU =

== Introduction ==
QEMU is a well-known emulator that supports ARM platforms and can be used to run the Fedora-ARM distribution. This provides a convenient platform for trying out the distribution, as well as for development and customization.

This howto describes a process for getting the Fedora-ARM distribution running under QEMU. Although we have tested this on Fedora 12, most of the process should work on any other Linux system as well. We assume that you can run commands as root (or using sudo) whenever necessary.

The QEMU system is set up to get its root file system either from a local loopback block device or over NFS from the host system (the latter requires networking between the host system and the QEMU guest). The guest's networking can then be configured to get its IP address using DHCP.

== Using QEMU with libvirt ==
libvirt is a virtualization management framework and toolkit. At the tool level, it provides the virsh virtualization shell for command-line VM management as well as the virt-manager GUI tool (plus additional tools).

By using libvirt to manage ARM VMs, you can leverage its capabilities (such as domain autostart, network setup with NAT and DHCP, and console disconnect/reconnect), and manage your ARM and x86 VMs in a consistent manner.
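For instance, two of these capabilities can be exercised directly from the shell; this is a brief sketch, assuming the domain name arm1 used later on this page:

```shell
# Have libvirt start the domain automatically when libvirtd starts
virsh autostart arm1

# Attach to the guest's serial console; press Ctrl+] to detach
# without stopping the guest
virsh console arm1
```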

Here is a quick-start guide to setting up ARM QEMU emulation under libvirt management:

=== Installing and starting the virtualization software ===
These steps install libvirt and related tools, if not installed already, plus the ARM emulator, and then start the libvirt daemon:

yum groupinstall virtualization
yum install qemu-system-arm
service libvirtd start
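As an optional sanity check (not part of the original steps), you can confirm the daemon is up before continuing:

```shell
# Check that the libvirt daemon is running
service libvirtd status

# virsh connects to the daemon and reports the library, API,
# and hypervisor versions; failure here means libvirtd is not up
virsh version
```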

=== Installing the ARM root file system and XML ===
These steps download a 4GB pre-built ext3 root image, ARM kernel, and libvirt XML domain definition, then define a VM to use them:

cd /var/lib/libvirt/images
wget http://ftp.linux.org.uk/pub/linux/arm/fedora/qemu/zImage-versatile-2.6.24-rc7.armv5tel \
  http://cdot.senecac.on.ca/arm/arm1.xml \
  http://cdot.senecac.on.ca/arm/arm1.img.gz
gunzip arm1.img.gz
restorecon *
virsh define arm1.xml

Note that the virtual machine definition cannot be performed using virt-manager at this time (F12) -- use virsh as shown above.
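Once defined, the domain can be verified with virsh (a quick check; the name arm1 comes from the downloaded XML):

```shell
# The new domain should appear in the list as "shut off"
virsh list --all

# Show the definition libvirt stored for it
virsh dumpxml arm1
```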

=== Starting the VM ===
You can now boot and access the VM using the virt-manager tool (Applications>System Tools>Virtual Machine Manager on the menu), or you can control it from the command line:

virsh start arm1

To view the graphical display without using virt-manager, use the virt-viewer command:

virt-viewer arm1

Or, alternately, use virsh vncdisplay to find the VM's VNC display, and then use the vncviewer program (from the tigervnc package) to view the VM console.
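Sketched as commands (the display number shown is illustrative):

```shell
# Ask libvirt which VNC display the guest console is on
virsh vncdisplay arm1    # prints e.g. ":0"

# Connect to that display with the TigerVNC client
vncviewer :0
```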

=== Using networking in the VM ===
You can get a DHCP address for the VM using a DHCP client such as dhclient, or set up a static IP configuration. Once you have IP configured, you can:


 * Use ssh instead of the console to access a shell on the VM (faster, and more flexible)
 * Use yum to install and remove software
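For example (the interface name, address, and package below are illustrative):

```shell
# Inside the guest: request an address over DHCP
dhclient eth0

# From the host: open a shell on the guest over ssh
# (substitute the address DHCP actually assigned)
ssh root@192.168.122.100

# Inside the guest: manage software as usual
yum install vim-enhanced
```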

=== Creating additional ARM VMs ===
For each additional ARM VM you wish to create:


 * Make a new copy of the XML file and the disk image under different names in /var/lib/libvirt/images
 * Edit the new XML file, making the following changes:
  * Change the name of the VM
  * Change the UUID (you can use uuidgen to generate a new one)
  * Change the image filename (in the <source> tag in the <disk> section) to point to the new image file you just created
 * Use virsh define to define the new VM from the modified XML file.
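Put together, the steps above might look like this (the names arm2.xml and arm2.img are hypothetical):

```shell
cd /var/lib/libvirt/images

# Copy the definition and the disk image under new names
cp arm1.xml arm2.xml
cp arm1.img arm2.img

# Generate a fresh UUID to paste into the <uuid> element
uuidgen

# Edit arm2.xml: change <name>, <uuid>, and the <source file=...>
# path so that it points at arm2.img
vi arm2.xml

# Register the new domain
virsh define arm2.xml
```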

== Installing QEMU ==
If you are running Fedora 7/8, you can just install qemu using yum:

yum install qemu

== Setting up networking ==
You can skip this section if you are going to use a local loopback device for your root file system. Note, however, that doing so may prevent you from using yum to install new packages on your Fedora-ARM guest.

Networking is set up between the host system and the QEMU guest to allow the guest to get its IP address using DHCP.

The networking setup uses host TAP devices to connect to QEMU. In recent kernels, this requires the CAP_NET_ADMIN capability. The host system needs to have TUN/TAP networking enabled (CONFIG_TUN=m or CONFIG_TUN=y). You can verify this using:

grep CONFIG_TUN= /boot/config-`uname -r`

Also make sure that /dev/net/tun exists. If not, you can create it as follows:

mknod /dev/net/tun c 10 200

Now we need to set up a network bridge interface. First, install the utilities used to configure an Ethernet bridge:

yum install bridge-utils

Then create the bridge, attach the host's Ethernet interface to it, and move the host's IP configuration onto the bridge:

/usr/sbin/brctl addbr br0
/sbin/ifconfig eth0 0.0.0.0 promisc up
/usr/sbin/brctl addif br0 eth0
/sbin/dhclient br0
/sbin/iptables -F FORWARD

Also, create a script qemu-ifup as follows; it will be needed when we boot into QEMU:

#!/bin/sh
/sbin/ifconfig $1 0.0.0.0 promisc up
/usr/sbin/brctl addif br0 $1

== Kernel Image ==
You can either simply use a pre-built kernel image or build your own from source.

=== Pre-built Kernel Image ===
You can get one of the following pre-built kernel images for ARM:

 * zImage-versatile-2.6.24-rc7.armv5tel
 * zImage-versatile-2.6.23-rc4
 * zImage-versatile-2.6.22

The README file alongside the images documents how these particular zImage-versatile kernels were built.

=== Build Kernel Image From Source ===
You will need an ARM cross-compiler. If you do not have one, download one from CodeSourcery's web site, install it, and ensure that it is in your path.

export ARCH=arm
export CROSS_COMPILE=arm-none-linux-gnueabi-

You can also use the Fedora cross toolchain that we provide.

Download the Linux kernel source (we have tested 2.6.21 and 2.6.22) and build it for the ARM Versatile board. First, you will have to customize the defconfig for it to work correctly:

cp arch/arm/configs/versatile_defconfig .config
make menuconfig

Enable DHCP Support (CONFIG_IP_PNP_DHCP). It is under Networking -> Networking Support -> Networking Options -> TCP/IP Networking -> IP: Kernel Level autoconfiguration.

Enable Universal Tun/Tap Driver Support (CONFIG_TUN). It is under Device Drivers -> Network Device Support -> Network Device Support.

Enable ARM EABI Support (CONFIG_AEABI). It is under Kernel Features.

Enable tmpfs support (CONFIG_TMPFS). It is under File Systems -> Pseudo File Systems.

If you will be booting from a file system image (not NFS), then the following steps should also be taken:

Enable PCI support (CONFIG_PCI). It is under Bus Support.

Enable SCSI Device Support. It is under Device Drivers -> SCSI Device Support.

Enable SCSI Disk Support. It is under Device Drivers -> SCSI Device Support.

Enable SYM53C8XX Version 2 SCSI Support. It is under Device Drivers -> SCSI Device Support -> SCSI low-level drivers.

Optionally you may enable: Device Drivers --> Serial ATA and Parallel ATA drivers ---> Marvell SATA support (I believe this supports the Marvell SATA drivers found on Marvell processor based Plug computers Sheeva, OpenRD, Guruplug, etc.)
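After saving the configuration, you can double-check that the relevant options made it into .config. This grep loop is a convenience sketch (run it from the kernel source tree; the CONFIG_* names are the usual ones for the menu entries listed above):

```shell
# Report any of the required options that are missing or disabled;
# the last four are only needed when booting from a file system image
for opt in CONFIG_IP_PNP_DHCP CONFIG_TUN CONFIG_AEABI CONFIG_TMPFS \
           CONFIG_PCI CONFIG_SCSI CONFIG_BLK_DEV_SD CONFIG_SCSI_SYM53C8XX_2; do
    grep -q "^$opt=[ym]" .config || echo "$opt is not set"
done
```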

Build the kernel:

make all
make zImage
make modules_install INSTALL_MOD_PATH=$TARGETDIR

The kernel image will be placed in arch/arm/boot/zImage. $TARGETDIR needs to be an alternate directory; these are the kernel modules you will later copy to /lib/modules/<kernel version> on your root file system.

== Root File System ==
Download the latest root file system tarball.

=== Root File System On Loopback Device ===
Create a file to serve as a loopback device -- 4GB is a reasonable size:

dd if=/dev/zero of=rootfs-f10-dev bs=1024k count=4096

Create a file system:

mkfs.ext3 -L arm rootfs-f10-dev

Prepare the root file system. This assumes that the loopback device is mounted under /mnt/ARM_FS:

mount rootfs-f10-dev /mnt/ARM_FS -o loop
tar -xjf rootfs-f10.tar.bz2 -C /mnt/ARM_FS
mv /mnt/ARM_FS/rootfs-f10/* /mnt/ARM_FS
rm -rf /mnt/ARM_FS/rootfs-f10

Copy your kernel modules from $TARGETDIR (from the kernel build) to /lib/modules/<kernel version> on the root file system, e.g. /lib/modules/2.6.33.8.
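For example (a sketch; 2.6.33.8 matches the example version above, so adjust it to the kernel you actually built):

```shell
# Copy the built modules from the staging directory into the
# mounted root file system
cp -a $TARGETDIR/lib/modules/2.6.33.8 /mnt/ARM_FS/lib/modules/
```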

Finally, unmount the image:

umount /mnt/ARM_FS

== Boot into QEMU ==
Now you are ready to boot into QEMU. Replace <host-ip> with the IP address of the host machine:

qemu-system-arm -M versatilepb -kernel zImage-versatile \
    -append "root=/dev/nfs nfsroot=<host-ip>:/mnt/ARM_FS rw ip=dhcp" \
    -net nic,vlan=0 -net tap,vlan=0,ifname=tap0,script=./qemu-ifup

If you're using the raw image instead of NFS, try this instead:

qemu-system-arm -M versatilepb -kernel zImage-versatile -hdc rootfs-f10-dev \
    -append "root=0800" \
    -net nic,vlan=0 -net tap,vlan=0,ifname=tap0,script=./qemu-ifup