Latest revision as of 16:54, 31 May 2024

This page describes the steps necessary to get Fedora for RISC-V running, either on emulated or real hardware.

Quickstart

This section assumes that you have already set up libvirt/QEMU on your machine and you're familiar with them, so it only highlights the details that are specific to RISC-V. It also assumes that you're running Fedora 40 as the host.

First of all, you need to download a disk image from https://dl.fedoraproject.org/pub/alt/risc-v/disk_images/Fedora-40/

As of this writing, the most recent image is Fedora-Minimal-40-20240502.n.0-sda.raw.xz so I will be using that throughout the section. If you're using a different image, you will need to adjust things accordingly.

Once you've downloaded the image, start by uncompressing it:

$ unxz Fedora-Minimal-40-20240502.n.0-sda.raw.xz
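If you're scripting these steps with a different image, the uncompressed file name can be derived from the archive name by stripping the .xz suffix. A minimal sketch (file name taken from the example above):

```shell
img_xz=Fedora-Minimal-40-20240502.n.0-sda.raw.xz
# Strip the .xz suffix to get the file name unxz will produce
img=${img_xz%.xz}
echo "$img"
```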

You need to figure out the root filesystem's UUID so that you can later pass this information to the kernel. The virt-filesystems utility, part of the guestfs-tools package, takes care of that:

$ virt-filesystems \
    -a Fedora-Minimal-40-20240502.n.0-sda.raw \
    --long \
    --uuid \
  | grep ^btrfsvol: \
  | awk '{print $7}' \
  | sort -u
ae525e47-51d5-4c98-8442-351d530612c3
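In the pipeline above, grep keeps only the btrfs subvolume lines and awk prints the seventh whitespace-separated field, which is where the UUID lands in this output format. A sketch of that field extraction on a single line (the sample line below is a hypothetical one modeled on virt-filesystems output, not captured from a real run):

```shell
# Hypothetical sample line; the UUID is the seventh field
line='btrfsvol:/dev/sda5/root btrfs - - 1.2G - ae525e47-51d5-4c98-8442-351d530612c3'
echo "$line" | awk '{print $7}'
```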

Additionally, you need to extract the kernel and initrd from the disk image. The virt-get-kernel tool automates this step:

$ virt-get-kernel \
    -a Fedora-Minimal-40-20240502.n.0-sda.raw
download: /boot/vmlinuz-6.8.7-300.4.riscv64.fc40.riscv64 -> ./vmlinuz-6.8.7-300.4.riscv64.fc40.riscv64
download: /boot/initramfs-6.8.7-300.4.riscv64.fc40.riscv64.img -> ./initramfs-6.8.7-300.4.riscv64.fc40.riscv64.img

Now move all the files to a directory that libvirt has access to:

$ sudo mv \
    Fedora-Minimal-40-20240502.n.0-sda.raw \
    vmlinuz-6.8.7-300.4.riscv64.fc40.riscv64 \
    initramfs-6.8.7-300.4.riscv64.fc40.riscv64.img \
    /var/lib/libvirt/images/

At this point, everything is ready and you can create the libvirt VM:

$ virt-install \
    --import \
    --name fedora-riscv \
    --osinfo fedora40 \
    --arch riscv64 \
    --vcpus 4 \
    --ram 4096 \
    --boot uefi,kernel=/var/lib/libvirt/images/vmlinuz-6.8.7-300.4.riscv64.fc40.riscv64,initrd=/var/lib/libvirt/images/initramfs-6.8.7-300.4.riscv64.fc40.riscv64.img,cmdline='root=UUID=ae525e47-51d5-4c98-8442-351d530612c3 ro rootflags=subvol=root rhgb LANG=en_US.UTF-8 console=ttyS0 earlycon=sbi' \
    --disk path=/var/lib/libvirt/images/Fedora-Minimal-40-20240502.n.0-sda.raw \
    --network default \
    --tpm none \
    --graphics none

Note how the UUID discovered earlier is included in the kernel command line. Quoting is also very important to get right.
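To see why the quoting matters, here is a small sketch using a hypothetical count_args helper: it counts how many arguments a command actually receives, with and without the single quotes around the cmdline value.

```shell
count_args() { echo $#; }
# Quoted: the whole cmdline value stays one argument (prints 2)
count_args --boot 'uefi,cmdline=root=/dev/vda ro console=ttyS0'
# Unquoted: the shell splits at the spaces (prints 4)
count_args --boot uefi,cmdline=root=/dev/vda ro console=ttyS0
```

Without the quotes, virt-install would only see the cmdline up to the first space and treat the rest as unrelated arguments.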

Disabling the TPM with --tpm none is only necessary as a temporary measure due to issues currently affecting swtpm in Fedora 40. If you want to, you can try omitting that option and see whether it works.

You should see a bunch of output coming from edk2 (the UEFI implementation we're using), followed by the usual kernel boot messages and, eventually, a login prompt. Please be patient, as the use of emulation makes everything significantly slower. Additionally, a SELinux relabel followed by a reboot will be performed as part of the import process, which slows things down further. Subsequent boots will be a lot faster.

To shut down the VM, run poweroff inside the guest OS. To boot it up again, use:

$ virsh start fedora-riscv --console


UKI images

These can be found in the same location but follow a different naming convention. As of this writing, the most recent image is Fedora.riscv64-40-20240429.n.0.qcow2.

The steps are similar to those described above, except that instead of dealing with kernel and initrd separately you need to extract a single file:

$ virt-copy-out \
    -a Fedora.riscv64-40-20240429.n.0.qcow2 \
    /boot/efi/EFI/Linux/6.8.7-300.4.riscv64.fc40.riscv64.efi \
    .

The virt-install command line is slightly different too; in particular, the --boot option becomes:

--boot uefi,kernel=/var/lib/libvirt/images/6.8.7-300.4.riscv64.fc40.riscv64.efi,cmdline='root=UUID=57cbf0ca-8b99-45ae-ae9d-3715598f11c4 ro rootflags=subvol=root rhgb LANG=en_US.UTF-8 console=ttyS0 earlycon=sbi'

These changes are enough to get the image to boot, but there are no passwords set up so you won't be able to log in. In order to address that, it's necessary to create a configuration file for cloud-init, for example with the following contents:

#cloud-config
password: fedora_rocks!
chpasswd:
  expire: false

Save this as user-data.yml, then add the following options to your virt-install command line:

--controller scsi,model=virtio-scsi \
--cloud-init user-data=user-data.yml

The configuration data should be picked up during boot, setting the default user's password as requested and allowing you to log in.
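If you'd rather log in over SSH than with a password, cloud-config can also install an authorized key for the default user. For example (the key below is a placeholder to be replaced with your own public key):

```yaml
#cloud-config
ssh_authorized_keys:
  - ssh-ed25519 AAAA... user@host
```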


Host setup

The steps outlined above assume that your machine is already set up for running RISC-V VMs. If that's not the case, read on.

At the very least, the following packages will need to be installed:

$ sudo dnf install \
    libvirt-daemon-driver-qemu \
    libvirt-daemon-driver-network \
    libvirt-daemon-config-network \
    libvirt-client \
    virt-install \
    qemu-system-riscv-core \
    edk2-riscv64

This will result in a fairly minimal install, suitable for running headless VMs. If you'd rather have a fully-featured install, add libvirt-daemon-qemu and libvirt-daemon-config-nwfilter to the list. Be warned though: doing so will result in significantly more packages being dragged in, some of which you might not care about (e.g. support for several additional architectures).
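In other words, for the fully-featured variant the two extra packages can be added on top of the list above:

```shell
$ sudo dnf install libvirt-daemon-qemu libvirt-daemon-config-nwfilter
```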

In order to grant your user access to libvirt and allow it to manage VMs, it needs to be made a member of the corresponding group:

$ sudo usermod -a -G libvirt $(whoami)

Finally, the default libvirt URI needs to be configured:

$ mkdir -p ~/.config/libvirt && \
  echo 'uri_default = "qemu:///system"' >~/.config/libvirt/libvirt.conf
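Alternatively, the same default can be set for a single session through the LIBVIRT_DEFAULT_URI environment variable, which libvirt checks before the configuration file:

```shell
$ export LIBVIRT_DEFAULT_URI=qemu:///system
```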

Now reboot the host. This is necessary because the changes to group membership won't be effective until the next login, and because the libvirt services are not automatically started during package installation.

After rebooting and logging back in, virsh should work and the default network should be up:

$ virsh uri
qemu:///system

$ virsh net-list
 Name      State    Autostart   Persistent
--------------------------------------------
 default   active   yes         yes

All done! You can now start creating RISC-V VMs.