Test Day:2010-04-08 Virtualization VHostNet


VhostNet Testing

Configuration

Guest : { RHEL3.9z(only 32) | RHEL4.8z | RHEL5.4z | WinXP(only 32) | WinVista | Win2k3 | Win2k8 | Win2k8R2 }

NicModel: { rtl8139 | virtio | e1000 }

Vhost : { on | off }

  1. modprobe vhost_net

$ qemu-kvm -drive file=$Guest.img,boot=on,if=... \
    -net nic,model=$NicModel,netdev=foo \
    -netdev tap,id=foo,ifname=msttap0,script=/ifup,downscript=no,vhost=$Vhost
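
To confirm vhost is actually engaged, a quick sanity check (not part of the test matrix) is to verify the module is loaded and a vhost worker kernel thread exists for the qemu process:

lsmod | grep vhost_net
ps -ef | grep '\[vhost-'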

Testcases

  1. data transmission/receiving
  2. ping testing
  3. jumbo frame test
  4. promisc mode testing
  5. PXE booting
  6. 802.1q vlan testing
  7. multicast testing
  8. fragmentation offload testing
  9. driver load/unload testing
  10. s4/s3 suspending for guest nics
  11. stress testing with netperf
  12. Multiple Nics Stress
  13. nic bonding test
1. data transmission/receiving

    * create a 5G file in the host

dd if=/dev/zero of=zero bs=1024k count=5000

    * boot the virtual machine

qemu-kvm ......

    * determine the guest ip address: guest_ip
    * determine the host ip address: host_ip
    * create a 5G file in the guest:

dd if=/dev/zero of=zero bs=1024k count=5000

    * scp the host file into guest:

scp zero root@guest_ip:/tmp

    * in the meantime, scp the guest file into host:

scp zero root@host_ip:/tmp

    * wait until both transfers complete
    * ping the guest from the host:

ping -c 10 $guest_ip

    * the packet loss ratio should be zero
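
Optionally, compare checksums of the transferred files against the originals as an extra integrity check (paths follow the steps above):

md5sum zero /tmp/zero    # run in both host and guest; all sums should match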

2. ping test: ping the guest with different packet sizes and intervals
for each nic in the guest, do the following steps:

    * determine the guest ip address: guest_ip
    * determine the ping counts: ping_count
    * packet_size = [ 0, 1, 48, 64, 512, 1440, 1500, 1505, 4096, 4192, 32767, 65507 ]
    * interval = [ 0.1, 0.01, 0.001, 0.0001 ]
    * ping the guest, looping over every packet_size and interval combination:

for s in $packet_size; do
    for i in $interval; do
        ping $guest_ip -s $s -i $i -c $ping_count
    done
done

    * determine the packet loss ratio; if it is not zero, fail the whole testcase
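
For any single invocation, the loss figure can be pulled out of ping's summary line (a sketch assuming the usual iputils output format):

ping $guest_ip -s $s -i $i -c $ping_count | grep -oE '[0-9.]+% packet loss'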

    * ping the guest with a 65508-byte payload:

for i in $interval; do ping $guest_ip -s 65508 -i $i -c $ping_count; done

    * determine the packet loss ratio; if it is not 100%, fail the whole testcase (a 65508-byte payload exceeds the 65507-byte maximum ICMP data size, so every ping should fail)
    * flood ping the guest for 10 minutes, then ping the guest again to confirm the packet loss is still zero; if not, fail the testcase.

for s in $packet_size; do ping -f $guest_ip -s $s; done

    * search for panic/oops in dmesg and /var/log/messages; if any are found, fail the whole testcase
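
A quick way to search both places (the patterns are illustrative):

dmesg | grep -iE 'panic|oops'
grep -iE 'panic|oops' /var/log/messages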

3. jumbo frame testing: test the function of jumbo frame support

    * determine the nicmodel of the guest: nicmodel
    * determine the max frame length (MTU) supported by the nicmodel: max_frame_length
    * determine the max ICMP packet length supported by the nicmodel: max_icmp_packet_size = max_frame_length - 28 (20-byte IP header + 8-byte ICMP header)
    * packet_size = [ 0, 1, 48, 64, 512, 1440, 1500, 1505, 4096, 4192, 32767, 65507 ]
    * start the virtual machine with interface name of IFNAME
    * determine the bridge which IFNAME is connected to: bridge

qemu-kvm -net nic,model=nicmodel,macaddr={macaddr} -net tap,ifname=IFNAME ...

    * change the MTU of the tap device in the host

ifconfig IFNAME mtu 65521

    * change the MTU of the ethernet device in the guest

ifconfig eth0 mtu max_frame_length

    * if needed, add a dedicated route entry in the host

route add -host guest_ip dev IFNAME

    * if needed, add a static ARP entry for the guest nic

arp -s guest_ip guest_mac

    * ping the guest from the host with path MTU discovery

ping -M do guest_ip -s max_icmp_packet_size -c 10

    * check the packet loss ratio; if it is not zero, fail the whole testcase.
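
tracepath (part of iputils) can optionally confirm the path MTU that the ping is exercising:

tracepath guest_ip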

    * execute the following script in the guest:

while true; do ifconfig eth0 mtu 1500; ifconfig eth0 mtu $max_frame_length; done

and at the same time flood ping the guest from the host for five minutes.

ping -f guest_ip -s max_icmp_packet_size

    * ping the guest from the host and check that the packet loss ratio is zero; if not, fail the whole case

ping -c 2 guest_ip

    * restore the mtu

ifconfig eth0 mtu 1500

4. promisc mode testing: transferring data in promisc mode

    * start the virtual machine
    * create a 1G file in the host

dd if=/dev/urandom of=/tmp/random bs=1024k count=1000 

    * get the checksum of the random file

md5sum /tmp/random

    * open a guest session and execute the following command to switch between promisc and non-promisc mode

while true; do
ifconfig eth0 promisc 
sleep 0.01
ifconfig eth0 -promisc
sleep 0.01
done
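
To make the later kill step easy, the loop can be run in the background with its PID recorded (a shell sketch):

( while true; do ifconfig eth0 promisc; sleep 0.01; ifconfig eth0 -promisc; sleep 0.01; done ) &
echo $! > /tmp/flap.pid    # later: kill $(cat /tmp/flap.pid)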

    * in the meantime, scp the random file to the guest

scp /tmp/random root@guest_ip:/tmp

    * get the md5sum in the guest

md5sum /tmp/random

    * compare the md5sums from guest and host; if they differ, fail the whole testcase
    * kill the previous script in the guest
    * restore the nic to non-promisc mode

ifconfig eth0 -promisc

5. PXE booting

    * start the virtual machine with interface name IFNAME

qemu-kvm -boot n -net nic,... -net tap,ifname=IFNAME ...

    * use tcpdump to snoop the tftp traffic

tcpdump -l -n port 69 -i IFNAME > tcpdump

    * wait for 2 minutes, then search for tftp packets in the tcpdump output

grep tftp tcpdump

    * if tftp packets appear in the tcpdump output, the testcase passes; otherwise it fails

6. 802.1q vlan testing

    * boot two virtual machines
    * log into first virtual machine, and config the vlan through: ( join the vlan 10 )

vconfig add eth0 10;ifconfig eth0.10 192.168.123.11

    * log into second virtual machine, and config the vlan through: ( join the vlan 20 )

vconfig add eth0 20;ifconfig eth0.20 192.168.123.12
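
Optionally, verify the vlan configuration in each guest via the 8021q module's procfs view before pinging:

cat /proc/net/vlan/config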

    * from the first virtual machine, ping the second and check the packet loss; the guests are in different vlans, so if the packet loss is not 100%, fail the whole test case

ping -c 10 192.168.123.12

    * log into second virtual machine, and config the vlan through: (join the vlan 10)

vconfig rem eth0.20;vconfig add eth0 10;ifconfig eth0.10 192.168.123.12

    * from the first virtual machine, ping the second again and check the packet loss; both guests are now in vlan 10, so if the packet loss is not 0%, fail the whole test case

    * for each guest (both first and second), remove the vlan config

vconfig rem eth0.10

7. multicast testing

    * boot a virtual machine

    * log into this virtual machine and join it to three multicast groups

ip maddr add 01:00:5e:c0:01:64 eth0
ip maddr add 02:00:5e:c0:01:64 eth0
ip maddr add 03:00:00:00:40:00 eth0


01:00:5e:c0:01:64 is an IP multicast MAC address
02:00:5e:c0:01:64 is a general multicast MAC address
03:00:00:00:40:00 is a LAN Manager multicast MAC address

    * listen for multicast packets using tcpdump in this virtual machine

tcpdump -ep ether multicast 2>/dev/null |tee /tmp/multicast.tmp 1>/dev/null

    * on the host, produce three random data packets and send one to each of the three multicast MAC addresses (a generator sketch follows the mapping below)

packet1 --> 01:00:5e:c0:01:64
packet2 --> 02:00:5e:c0:01:64
packet3 --> 03:00:00:00:40:00
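
The plan does not prescribe a packet generator; one possible sketch sends the frames from the host with scapy (assumptions: scapy is installed, the guest's tap is msttap0 as in the configuration above, and the content1..content3 payloads are illustrative):

python -c "from scapy.all import Ether, Raw, sendp; \
sendp(Ether(dst='01:00:5e:c0:01:64')/Raw(load='content1'), iface='msttap0'); \
sendp(Ether(dst='02:00:5e:c0:01:64')/Raw(load='content2'), iface='msttap0'); \
sendp(Ether(dst='03:00:00:00:40:00')/Raw(load='content3'), iface='msttap0')"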

    * sleep 20 seconds, then kill the tcpdump process in the virtual machine

    * copy the tcpdump result to the host and check that it contains all three packets; since tcpdump -e prints link-layer headers, the simplest check is to match each destination MAC address

for mac in 01:00:5e:c0:01:64 02:00:5e:c0:01:64 03:00:00:00:40:00; do grep -c $mac /tmp/multicast.tmp; done

8. fragmentation offload test

    * boot a virtual machine

    * log into this virtual machine and disable the nic's GSO, leaving only TSO enabled; to test GSO instead, disable TSO and enable GSO

ethtool -K eth0 gso off
ethtool -K eth0 tso on

    * check that the TSO/GSO settings took effect in the guest

ethtool -k eth0

    * listen for TCP packets on the host

nc -l 5334 |tee /tmp/frag_offload.dd >/dev/null

    * use dd to create a 10M file in the guest

dd if=/dev/urandom of=/tmp/frag_offload.dd bs=10M count=1

    * use tcpdump to capture packets at the data link layer in the guest

tcpdump -e port 5334 >/tmp/frag_offload.tcpdump 2>/dev/null

    * send the 10M file to the host

cat /tmp/frag_offload.dd | nc $hostIP 5334

    * check whether any captured packet is larger than the MTU; with TSO/GSO enabled, tcpdump in the guest should see such over-sized segments (a sketch follows)
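
A minimal way to scan the capture, assuming tcpdump -e prints a "length N" field per frame and a 1500-byte MTU (1514-byte frames):

awk '{for(i=1;i<NF;i++) if($i=="length" && $(i+1)+0>1514) print}' /tmp/frag_offload.tcpdump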

    * compute the md5sum of /tmp/frag_offload.dd on both the host and the virtual machine, and check that they match

md5sum /tmp/frag_offload.dd

9. driver load/unload testing

    * boot the guests and determine the driver for the guest nics: driver

ethtool -i eth0 | grep driver | awk '{print $2}'

    * download and install prozilla in the guests; this may require libncurses5 (libncurses5-dev)

wget http://10.66.70.67:3000/attachments/52/prozilla-1.3.7.3.tar.gz
tar -zxf prozilla-1.3.7.3.tar.gz
cd prozilla-1.3.7.3
./configure
make install

    * open a guest session and run the following scripts

while true; do
ifconfig eth0 down
sleep 0.1
modprobe -r $driver
modprobe $driver
ifconfig eth0 up
sleep 0.1
done

    * in the meantime, open a guest session and download the kernel archive from kernel.org

proz -k 10 http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.31.5.tar.bz2

    * kill the scripts
    * extract the kernel archive; if any errors occur during extraction, fail the whole testcase

tar -jxf linux-2.6.31.5.tar.bz2

10. s4/s3 suspending for guest nics

    * determine the ip address of guest: guest_ip
    * boot the guest
    * suspend the guest into s3/s4

echo mem > /sys/power/state     # suspend to RAM (S3)
echo disk > /sys/power/state    # suspend to disk (S4)

    * wake the guest from S3/S4 and test the virtual nics with ping; if the packet loss ratio is not zero, fail the whole test

ping guest_ip -c 10

11. stress testing with netperf

    * boot the virtual machine
    * determine the stress test duration: t seconds
    * determine the ip address of the guest: ip_guest
    * determine the ip address of the host: ip_host
    * install netperf2 in both guest and host
    * run netperf server in guest

./netserver

    * run netperf client in the host to test TCP

./netperf -H ip_guest -l t

    * wait for netperf to finish
    * run netperf client in host to test UDP

./netperf -H ip_guest -l t -t UDP_STREAM

    * wait for netperf to finish
    * run netperf server in the host

./netserver

    * run the netperf client in the guest

./netperf -H ip_host -l t

    * wait for netperf to finish
    * run netperf client in host to test UDP

./netperf -H ip_host -l t -t UDP_STREAM

    * wait for netperf to finish
    * ping the guest from the host; the packet loss should be zero

ping -c 10 ip_guest

    * search dmesg in the guest for call traces (see the command below)
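
A quick check (the patterns are illustrative):

dmesg | grep -iE 'call trace|oops|panic'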

12. Multiple Nics Stress

    * determine the test time: testtime
    * boot the virtual machine with three different models of virtual nics: rtl8139, virtio, e1000

qemu-kvm -net nic,model=rtl8139... -net tap -net nic,model=e1000,... -net tap...... -net nic,model=virtio,...... -net tap...... ...

    * Adjust the arp policy so that each interface uses its own hw address to announce and respond to arp packets

echo 2 > /proc/sys/net/ipv4/conf/default/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/default/arp_announce

    * Set the max MTU for each nic

ifconfig eth0 mtu 1500     # rtl8139
ifconfig eth1 mtu 16110    # e1000
ifconfig eth2 mtu 65535    # virtio

    * determine the guest ip address: ip1 ip2 ip3
    * From the host, flood ping each nic in parallel with the following commands, lasting testtime seconds (a parallel-run sketch follows the commands).

ping -f ip1 -s {size from 0 to 1500}
ping -f ip2 -s {size from 0 to 16110}
ping -f ip3 -s {size from 0 to 65507}
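
One way to run the three flood pings in parallel for a bounded time (a sketch using coreutils timeout; the sizes shown are the per-NIC maxima set above):

timeout $testtime ping -f ip1 -s 1500 &
timeout $testtime ping -f ip2 -s 16110 &
timeout $testtime ping -f ip3 -s 65507 &
wait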

    * ping each nic to verify it still works; the packet loss should be zero.

ping -c 10 ip1
ping -c 10 ip2
ping -c 10 ip3

13. nic bonding test

    * determine the four nic models used in the test: model1, model2, model3, model4

qemu-kvm -net nic,model=model1,vlan=0 -net tap,vlan=0 -net nic,model=model2,vlan=1 -net tap,vlan=1 -net nic,model=model3,vlan=2 -net tap,vlan=2 -net nic,model=model4,vlan=3 -net tap,vlan=3 -m 1024 ......

    * configure the bonding interfaces configuration files:

/etc/sysconfig/network-scripts/ifcfg-{ethnum}

DEVICE={ethnum}
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none

/etc/sysconfig/network-scripts/ifcfg-bond0

DEVICE=bond0
BOOTPROTO=dhcp
ONBOOT=yes
USERCTL=no

    * modprobe the bonding module

modprobe bonding

    * restart the network interfaces

service network restart

    * use ifconfig to check that the bond0 device has been set up, and note its ip address: ip_bond0

ifconfig bond0
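
The bonding driver also reports master/slave state through procfs, which makes it easy to confirm that all four slaves joined bond0:

cat /proc/net/bonding/bond0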

    * in the guest, run the following script to down/up the interfaces eth0-eth3

while true; do
ifconfig eth0 down
ifconfig eth0 up
ifconfig eth1 down 
ifconfig eth1 up
ifconfig eth2 down
ifconfig eth2 up
ifconfig eth3 down
ifconfig eth3 up
done

    * in the meantime, ping the guest from the host with the following command; the packet loss ratio should be zero

ping ip_bond0 -c 1000

    * then flood ping the guest for two minutes

ping -f ip_bond0

    * and then ping the guest, the packet loss ratio should be zero

ping -c 100 ip_bond0
