From Fedora Project Wiki
Latest revision as of 14:50, 28 June 2024

This page describes the steps necessary to get Fedora for RISC-V running on emulated hardware.

= Quickstart =

This section assumes that you have already set up libvirt/QEMU on your machine and you're familiar with them, so it only highlights the details that are specific to RISC-V. It also assumes that you're running Fedora 40 as the host.

First of all, you need to download a disk image from https://dl.fedoraproject.org/pub/alt/risc-v/disk_images/Fedora-40/

As of this writing, the most recent image is <code>Fedora-Minimal-40-20240502.n.0-sda.raw.xz</code>, so I will be using that throughout the section. If you're using a different image, you will need to adjust things accordingly.

Once you've downloaded the image, start by uncompressing it:

<pre>
$ unxz Fedora-Minimal-40-20240502.n.0-sda.raw.xz
</pre>

You need to figure out the root filesystem's UUID so that you can later pass this information to the kernel. The <code>virt-filesystems</code> utility, part of the <code>guestfs-tools</code> package, takes care of that:

<pre>
$ virt-filesystems \
    -a Fedora-Minimal-40-20240502.n.0-sda.raw \
    --long \
    --uuid \
  | grep ^btrfsvol: \
  | awk '{print $7}' \
  | sort -u
ae525e47-51d5-4c98-8442-351d530612c3
</pre>

Additionally, you need to extract the kernel and initrd from the disk image. The <code>virt-get-kernel</code> tool automates this step:

<pre>
$ virt-get-kernel \
    -a Fedora-Minimal-40-20240502.n.0-sda.raw
download: /boot/vmlinuz-6.8.7-300.4.riscv64.fc40.riscv64 -> ./vmlinuz-6.8.7-300.4.riscv64.fc40.riscv64
download: /boot/initramfs-6.8.7-300.4.riscv64.fc40.riscv64.img -> ./initramfs-6.8.7-300.4.riscv64.fc40.riscv64.img
</pre>

Now move all the files to a directory that libvirt has access to:

<pre>
$ sudo mv \
    Fedora-Minimal-40-20240502.n.0-sda.raw \
    vmlinuz-6.8.7-300.4.riscv64.fc40.riscv64 \
    initramfs-6.8.7-300.4.riscv64.fc40.riscv64.img \
    /var/lib/libvirt/images/
</pre>

At this point, everything is ready and you can create the libvirt VM:

<pre>
$ virt-install \
    --import \
    --name fedora-riscv \
    --osinfo fedora40 \
    --arch riscv64 \
    --vcpus 4 \
    --ram 4096 \
    --boot uefi,kernel=/var/lib/libvirt/images/vmlinuz-6.8.7-300.4.riscv64.fc40.riscv64,initrd=/var/lib/libvirt/images/initramfs-6.8.7-300.4.riscv64.fc40.riscv64.img,cmdline='root=UUID=ae525e47-51d5-4c98-8442-351d530612c3 ro rootflags=subvol=root rhgb LANG=en_US.UTF-8 console=ttyS0 earlycon=sbi' \
    --disk path=/var/lib/libvirt/images/Fedora-Minimal-40-20240502.n.0-sda.raw \
    --network default \
    --tpm none \
    --graphics none
</pre>

Note how the UUID discovered earlier is included in the kernel command line. Quoting is also very important to get right.
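If you script this step, capturing the UUID in a variable and splicing it into the kernel command line avoids hand-editing the long <code>--boot</code> argument. The following is a minimal sketch: the UUID is the example value from this page, and in practice you would substitute the output of the <code>virt-filesystems</code> pipeline shown earlier.

```shell
# Sketch: build the kernel command line from a captured UUID.
# The UUID below is the example value from this page; normally you
# would capture it from virt-filesystems instead of hard-coding it.
uuid="ae525e47-51d5-4c98-8442-351d530612c3"

cmdline="root=UUID=${uuid} ro rootflags=subvol=root rhgb LANG=en_US.UTF-8 console=ttyS0 earlycon=sbi"

# Single quotes around the whole cmdline=... value keep the spaces
# inside it from being split when it is passed to virt-install.
printf "cmdline='%s'\n" "$cmdline"
```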

Disabling the TPM with <code>--tpm none</code> is only necessary as a temporary measure due to issues currently affecting <code>swtpm</code> in Fedora 40. If you want to, you can try omitting that option and see whether it works.

You should see a bunch of output coming from edk2 (the UEFI implementation we're using), followed by the usual kernel boot messages and, eventually, a login prompt. Please be patient, as the use of emulation makes everything significantly slower. Additionally, an SELinux relabel followed by a reboot will be performed as part of the import process, which slows things down further. Subsequent boots will be a lot faster.

To shut down the VM, run <code>poweroff</code> inside the guest OS. To boot it up again, use:

<pre>
$ virsh start fedora-riscv --console
</pre>


= UKI images =

UKI (Unified Kernel Image) builds can be found in the same location as the regular disk images, but follow a different naming convention. As of this writing, the most recent image is <code>Fedora.riscv64-40-20240429.n.0.qcow2</code>.
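The naming convention can be unpacked mechanically. Here is a hypothetical sketch using only POSIX parameter expansion; the file name is the example above, and the field layout (release number, then compose date) is inferred from it:

```shell
# Sketch: pull the release and compose date out of a UKI image name.
# The name below is the example from this page.
image="Fedora.riscv64-40-20240429.n.0.qcow2"

rest="${image#Fedora.riscv64-}"   # strip the "Fedora.riscv64-" prefix
release="${rest%%-*}"             # text up to the next dash: Fedora release
rest="${rest#*-}"                 # drop the release and its dash
compose="${rest%%.*}"             # text up to the first dot: compose date

echo "release=${release} compose=${compose}"
```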

The steps are similar to those described above, except that instead of dealing with kernel and initrd separately you need to extract a single file:

<pre>
$ virt-copy-out \
    -a Fedora.riscv64-40-20240429.n.0.qcow2 \
    /boot/efi/EFI/Linux/6.8.7-300.4.riscv64.fc40.riscv64.efi \
    .
</pre>

The <code>virt-install</code> command line is slightly different too; in particular, the <code>--boot</code> option becomes:

<pre>
--boot uefi,kernel=/var/lib/libvirt/images/6.8.7-300.4.riscv64.fc40.riscv64.efi,cmdline='root=UUID=57cbf0ca-8b99-45ae-ae9d-3715598f11c4 ro rootflags=subvol=root rhgb LANG=en_US.UTF-8 console=ttyS0 earlycon=sbi'
</pre>

These changes are enough to get the image to boot, but there are no passwords set up, so you won't be able to log in. To address that, create a configuration file for <code>cloud-init</code>, for example with the following contents:

<pre>
#cloud-config

password: fedora_rocks!
chpasswd:
  expire: false
</pre>

Save this as <code>user-data.yml</code>, then add the following options to your <code>virt-install</code> command line:

<pre>
--controller scsi,model=virtio-scsi \
--cloud-init user-data=user-data.yml
</pre>

The configuration data should be picked up during boot, setting the default user's password as requested and allowing you to log in.


= Host setup =

The steps outlined above assume that your machine is already set up for running RISC-V VMs. If that's not the case, read on.

At the very least, the following packages will need to be installed:

<pre>
$ sudo dnf install \
    libvirt-daemon-driver-qemu \
    libvirt-daemon-driver-network \
    libvirt-daemon-config-network \
    libvirt-client \
    virt-install \
    qemu-system-riscv-core \
    edk2-riscv64
</pre>

This will result in a fairly minimal install, suitable for running headless VMs. If you'd rather have a fully-featured install, add <code>libvirt-daemon-qemu</code> and <code>libvirt-daemon-config-nwfilter</code> to the list. Be warned though: doing so will result in significantly more packages being dragged in, some of which you might not care about (e.g. support for several additional architectures).

In order to grant your user access to libvirt and allow it to manage VMs, it needs to be made a member of the corresponding group:

<pre>
$ sudo usermod -a -G libvirt $(whoami)
</pre>

Finally, the default libvirt URI needs to be configured:

<pre>
$ mkdir -p ~/.config/libvirt && \
  echo 'uri_default = "qemu:///system"' >~/.config/libvirt/libvirt.conf
</pre>

Now reboot the host. This is necessary because the changes to group membership won't be effective until the next login, and because the libvirt services are not automatically started during package installation.
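Once you're back in, you can confirm that the group change actually took effect for your session. This is a small sketch using standard tools; the point is that group membership only applies to sessions started after <code>usermod</code> ran:

```shell
# Sketch: check whether the current session is in the libvirt group.
# Group changes only apply to sessions started after usermod ran,
# which is why a reboot (or at least a re-login) is needed.
if id -nG | grep -qw libvirt; then
    echo "libvirt group active in this session"
else
    echo "not yet - log out and back in (or reboot) first"
fi
```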

After rebooting and logging back in, <code>virsh</code> should work and the default network should be up:

<pre>
$ virsh uri
qemu:///system

$ virsh net-list
 Name      State    Autostart   Persistent
--------------------------------------------
 default   active   yes         yes
</pre>

All done! You can now start creating RISC-V VMs.