From Fedora Project Wiki

Latest revision as of 17:43, 24 November 2020

If you want to run the openQA tests that rely on advanced networking, you must configure it; see the upstream documentation. You can configure the network in any way you like (using openvswitch, VDE, or any other software-defined networking system) so long as it meets openQA's expectations: there must be a bridge with IP 172.16.2.2 (the upstream default is 10.0.2.2, but Fedora's tests are written to expect 172.16.2.2), with as many tapN devices attached to the bridge as there are worker instances on each worker host, which the qemu processes can attach to using -netdev tap. The worker instances must be able to communicate with each other and with the worker host's web server, which listens on the bridge interface (on a random port number within a specified range). Traffic from the workers must be masqueraded to the external network.
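For orientation, each worker's qemu process ends up attached to its tap device with arguments along these lines (an illustrative fragment only; the exact arguments are generated by os-autoinst and vary, and the id and mac values here are made up):

```
qemu-system-x86_64 ... \
    -netdev tap,id=qanet0,ifname=tap0,script=no,downscript=no \
    -device virtio-net,netdev=qanet0,mac=52:54:00:12:34:56
```

The ifname matches the worker instance's tap device; tap0 here would belong to worker instance 1.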

These instructions are for openvswitch, as used by Fedora and SUSE; it is probably best to stick with it unless you know exactly what you're doing. os-autoinst has a helper service, os-autoinst-openvswitch, which isolates groups of workers on their own VLANs, so you don't have to worry about address collisions if you have more than one set of parallel jobs running at once (e.g. if you have a set of jobs which uses hardcoded static IPs, and it happens to run for two arches or images at once). The workers for each set of parallel jobs are assigned a different VLAN.

Install the packages:

dnf install os-autoinst-openvswitch
dnf install tunctl
dnf install iptables-services
dnf install network-scripts

Create the bridge config file, /etc/sysconfig/network-scripts/ifcfg-br0, with these contents:

DEVICETYPE='ovs'
TYPE='OVSBridge'
BOOTPROTO='static'
IPADDR='172.16.2.2'
NETMASK='255.254.0.0'
DEVICE=br0
STP=off
ONBOOT='yes'
NAME='br0'
HOTPLUG='no'

If you already have a br0, you can name the bridge something else: just change the value br0 wherever it appears in these instructions to your desired name, and set the environment variable OS_AUTOINST_USE_BRIDGE to your desired name when launching os-autoinst-openvswitch. You can do this by copying /usr/lib/systemd/system/os-autoinst-openvswitch.service to /etc/systemd/system/os-autoinst-openvswitch.service and changing the value in the Environment line, then running systemctl daemon-reload.
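Alternatively, a systemd drop-in should avoid copying the whole unit file (an untested sketch; the drop-in filename is arbitrary, and br1 stands in for your bridge name): create /etc/systemd/system/os-autoinst-openvswitch.service.d/bridge.conf with these contents, then run systemctl daemon-reload:

```ini
[Service]
# a later Environment= assignment of the same variable overrides
# the one in the shipped unit file
Environment=OS_AUTOINST_USE_BRIDGE=br1
```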

Create file /etc/sysconfig/os-autoinst-openvswitch, with these contents:

OS_AUTOINST_BRIDGE_LOCAL_IP=172.16.2.2
OS_AUTOINST_BRIDGE_REWRITE_TARGET=172.17.0.0
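Note the unusual 255.254.0.0 (/15) netmask on the bridge: it puts both the 172.16.0.0/16 range used by the workers and the 172.17.0.0 rewrite target above into one directly-connected network. A quick check (the net helper is just for illustration, not part of any openQA tooling):

```shell
#!/bin/sh
# Print the network address for ADDR MASK (both dotted quads).
net() {
    IFS=. read -r a1 a2 a3 a4 <<EOF
$1
EOF
    IFS=. read -r m1 m2 m3 m4 <<EOF
$2
EOF
    echo "$((a1 & m1)).$((a2 & m2)).$((a3 & m3)).$((a4 & m4))"
}

net 172.16.2.2 255.254.0.0   # bridge address -> 172.16.0.0
net 172.17.0.1 255.254.0.0   # rewrite target -> 172.16.0.0
```

Both addresses land in the same 172.16.0.0/15 network, so the rewritten addresses remain reachable over the bridge.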

Create /etc/sysconfig/network-scripts/ifcfg-tap0, with these contents:

DEVICETYPE='ovs'
TYPE='OVSPort'
OVS_BRIDGE='br0'
DEVICE='tap0'
ONBOOT='yes'
BOOTPROTO='none'
HOTPLUG='no'

Create as many ifcfg-tapN files as you have worker instances, with the DEVICE changed appropriately: ifcfg-tap1, ifcfg-tap2 and so on. Note that you cannot name the tap devices any other way; by default, each openQA worker picks its tap device based on its instance number (worker 2 uses tap1, and so on). Tests can override this and specify a particular tap device, but when a test does not, the behaviour is not configurable.
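The extra files can be generated from the tap0 one with a small loop along these lines (a sketch: DIR is a stand-in for illustration; on a real worker host it would be /etc/sysconfig/network-scripts, where ifcfg-tap0 already exists, so the heredoc would be unnecessary):

```shell
#!/bin/sh
# Sketch: generate ifcfg-tap1 .. ifcfg-tap4 from the tap0 file.
DIR=$(mktemp -d)
cat > "$DIR/ifcfg-tap0" <<'EOF'
DEVICETYPE='ovs'
TYPE='OVSPort'
OVS_BRIDGE='br0'
DEVICE='tap0'
ONBOOT='yes'
BOOTPROTO='none'
HOTPLUG='no'
EOF

# substitute the device name everywhere it appears in the template
for i in 1 2 3 4; do
    sed "s/tap0/tap$i/" "$DIR/ifcfg-tap0" > "$DIR/ifcfg-tap$i"
done

grep DEVICE "$DIR/ifcfg-tap3"   # -> DEVICE='tap3'
```

Adjust the loop bound to match your worker instance count.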

Create /sbin/ifup-pre-local with these contents:

#!/bin/sh
# ifup calls this with the interface config name as $1 (e.g. "ifcfg-tap0").
# If the interface being brought up is tap<N>, create the tap device first.
if=$(echo "$1" | sed -e 's,ifcfg-,,')
tap=$(echo "$if" | sed -e 's,[0-9]\+$,,')
# POSIX test uses "=", not the bash-only "==": this script runs under /bin/sh
if [ "$tap" = "tap" ]; then
    tunctl -u _openqa-worker -p -t "$if"
fi

This will create the underlying device for the tap connections when they are brought up. Ensure the file is executable by root:

chmod ug+x /sbin/ifup-pre-local
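The name parsing in that script can be checked in isolation. This standalone sketch mirrors its two sed expressions (parse is a hypothetical helper for demonstration, not part of the real script):

```shell
#!/bin/sh
# Mirror ifup-pre-local's parsing: strip any "ifcfg-" prefix,
# then strip trailing digits; the result is "tap" only for tap devices.
parse() {
    if=$(echo "$1" | sed -e 's,ifcfg-,,')
    tap=$(echo "$if" | sed -e 's,[0-9]\+$,,')
    echo "$tap"
}

parse ifcfg-tap3     # -> tap    (device would be created)
parse ifcfg-enp2s0   # -> enp2s  (left alone)
parse br0            # -> br     (left alone)
```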

Adjust the firewall configuration. For iptables, /etc/sysconfig/iptables should look something like this, with enp2s0 changed to the name of whatever adapter you have connected to the outside world:

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]

# allow ping and traceroute
-A INPUT -p icmp -j ACCEPT

# localhost is fine
-A INPUT -i lo -j ACCEPT

# Established connections allowed
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT

# allow ssh - always
-A INPUT -m conntrack --ctstate NEW -m tcp -p tcp --dport 22 -j ACCEPT

# allow HTTP / HTTPS
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT

# allow port forwarding
-A FORWARD -i br0 -j ACCEPT
-A FORWARD -i enp2s0 -o br0 -m state --state RELATED,ESTABLISHED -j ACCEPT

# allow all traffic from br0
-A INPUT -i br0 -j ACCEPT

# otherwise kick everything out
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

*nat
# setup masquerade
-A POSTROUTING -o enp2s0 -j MASQUERADE
COMMIT

If you will use iptables, disable firewalld:

systemctl disable firewalld.service; systemctl stop firewalld.service

If you want to use firewalld, figuring out how to configure it to allow forwarding and NAT from the openvswitch network is up to you.

Enable forwarding in sysctl:

sysctl -w net.ipv4.ip_forward=1

To make this permanent, edit /etc/sysctl.conf and add this line:

net.ipv4.ip_forward = 1

Enable all the networking-related services:

systemctl enable openvswitch.service network.service iptables.service os-autoinst-openvswitch.service
systemctl restart openvswitch.service network.service iptables.service os-autoinst-openvswitch.service