Multiqueue virtio-net
Summary
Multiqueue virtio-net provides an approach that scales network performance with the number of vcpus by allowing them to transfer packets through more than one virtqueue pair.
Owner
- Name: Jason Wang
- Email: <jasowang@redhat.com>
- Name: Cole Robinson
- Email: <crobinso@redhat.com>
Current status
- Targeted release: Fedora 20
- Last updated: 2013-03-15
- Percentage of completion: 66%
Detailed Description
Today's high-end servers have more processors, and guests running on them tend to have an increasing number of vcpus. The scalability of the guest protocol stack is restricted by single-queue virtio-net:
- The network performance does not scale as the number of vcpus increases: the guest cannot transmit or receive packets in parallel, because virtio-net has only one TX and one RX virtqueue and the virtio-net driver must serialize access to them. Even though there are software techniques such as RFS that spread the load across processors, they only address one direction of the traffic and are expensive in a guest because they rely on IPIs, which bring extra overhead in a virtualized environment.
- Multiqueue NICs are increasingly common and well supported by the Linux kernel, but the current virtual NIC cannot utilize this multiqueue support: the tap and virtio-net backends must serialize the concurrent transmit/receive requests coming from different cpus.
In order to remove those bottlenecks, we must allow parallel packet processing by introducing multiqueue support in both the back-end and the guest driver. Ideally, packet handling can then be done by the processors in parallel without interleaving, so that network performance scales with the number of vcpus.
The following parts were changed to parallelize packet processing:
- tuntap: convert the driver to multiqueue by allowing multiple sockets/fds to be attached to the device; each socket/fd exposed by the device can be treated as a queue (see the host-side example after this list).
- qemu:
- net: Add multiple queue infrastructure to qemu
- let qemu create multiple vhost threads for a virtio-net device
- userspace multiple queue virtio-net
- guest driver: let the driver use multiple virtqueues for packet transmission and reception.
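As a rough host-side illustration of the tuntap change (assuming an iproute2 build that supports the multi_queue flag; the device name tap0 is a placeholder):
  # create a tap device that allows multiple fds to be attached, one per queue
  ip tuntap add dev tap0 mode tap multi_queue
  ip link set tap0 up
Each open of /dev/net/tun that attaches to such a device can then act as an independent queue.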
Benefit to Fedora
Improve the performance of virtio-net with SMP guests on a Fedora host.
Scope
- tuntap driver in kernel (DONE, in 3.8-rc)
- guest virtio-net driver (DONE, in 3.8-rc)
- qemu changes (DONE, in qemu 1.4)
- libvirt (Not done)
- virt-manager (optional, Not done)
How To Test
- hardware requirements: no special requirements, but a multiqueue Ethernet card on the host makes testing easier
- qemu command line (e.g., to start a guest with 4 queue pairs)
- qemu -netdev tap,id=hn0,queues=4,vhost=on -device virtio-net-pci,netdev=hn0,mq=on,vectors=9
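A fuller invocation might look like the following sketch; the memory size, vcpu count, and disk image path are placeholders:
  qemu-kvm -m 2048 -smp 4 \
      -drive file=/path/to/guest.img,if=virtio \
      -netdev tap,id=hn0,queues=4,vhost=on \
      -device virtio-net-pci,netdev=hn0,mq=on,vectors=9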
- test cases:
- boot & reboot
- boot a guest with 2 queues then reboot
- enable multiqueue support in guest
- boot a guest with 2 queues
- enable the multiqueue support by ethtool -L $interface combined 2
- check that 2 tx queues and 2 rx queues exist in /sys/class/net/$interface/queues
- run 20 concurrent netperf tests in the guest: for i in $(seq 20); do netperf -H $ip -t TCP_RR & done
- check that packets go to each queue by verifying that interrupts are distributed across the queues in /proc/interrupts in the guest
- change back to single queue mode by ethtool -L $interface combined 1
- run 4 concurrent netperf tests again and measure the results
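Put together, the guest-side commands for this test case might look like the following sketch, assuming the guest interface is eth0 and a netserver is reachable at $ip:
  ethtool -L eth0 combined 2               # enable both queue pairs
  ls /sys/class/net/eth0/queues/           # expect rx-0 rx-1 tx-0 tx-1
  for i in $(seq 20); do netperf -H $ip -t TCP_RR & done
  grep virtio /proc/interrupts             # interrupts should be spread across the queues
  ethtool -L eth0 combined 1               # switch back to single queue mode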
- migration test
- boot a src vm with 2 queues
- boot a dst vm with 2 queues
- enable multiqueue in src guest by ethtool -L $interface combined 2
- scp a file from the host or an external host to the src vm
- migrate the vm from src to dst
- scp should finish as usual without any error
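In the qemu monitor this could look like the sketch below; the destination host and port are placeholders, and the dst vm is assumed to have been started with the same command line plus -incoming tcp:0:4444:
  (qemu) migrate -d tcp:dst-host:4444      # start the migration while the scp is running
  (qemu) info migrate                      # poll until the status is completed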
- set_link test
- boot a guest with 2 queues
- enable multiqueue in guest by ethtool -L $interface combined 2
- use set_link in the qemu monitor to turn the link off
- check that the network link is down in the guest
- use set_link in the qemu monitor to turn the link back on
- check that the network link is up in the guest
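A possible monitor/guest sequence, assuming the virtio-net device was given an id (e.g. -device virtio-net-pci,...,id=nic0) and shows up as eth0 in the guest:
  (qemu) set_link nic0 off
  ip link show eth0                        # in the guest: carrier should be down
  (qemu) set_link nic0 on
  ip link show eth0                        # in the guest: carrier should be up again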
- hotplug test
- boot a guest with 2 queues
- use netdev_del and device_del in qemu monitor to delete the virtio-net device
- check in guest and monitor (info network) that the device is removed
- use netdev_add tap,id=hn1,queues=2,vhost=on and device_add in the monitor to hot add a device in guest
- enable multiqueue support on the newly added device with ethtool -L $interface combined 2
- run 20 concurrent netperf tests in the guest: for i in $(seq 20); do netperf -H $ip -t TCP_RR & done
- check that packets go to each queue by verifying that interrupts are distributed across the queues in /proc/interrupts in the guest
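The monitor side of this test might look like the following sketch, assuming the original device/backend ids nic0/hn0 and nic1/hn1 for the hot-added device:
  (qemu) device_del nic0
  (qemu) netdev_del hn0
  (qemu) info network                      # the old device should be gone
  (qemu) netdev_add tap,id=hn1,queues=2,vhost=on
  (qemu) device_add virtio-net-pci,netdev=hn1,mq=on,vectors=5,id=nic1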
- pktgen stress test
- boot a guest with 2 queues
- enable multiqueue support in the guest with ethtool -L $interface combined 2
- use pktgen to generate the load on both queues in guest
- use pktgen to generate the load on both queues in host tap device
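A rough guest-side pktgen sketch, assuming the interface is eth0 with 2 queues enabled; the packet count, destination IP and MAC are placeholders (the host-side run is analogous against the tap device):
  modprobe pktgen
  # bind one pktgen worker per queue using the device@queue naming
  echo "add_device eth0@0" > /proc/net/pktgen/kpktgend_0
  echo "add_device eth0@1" > /proc/net/pktgen/kpktgend_1
  for q in 0 1; do
      echo "count 1000000"             > /proc/net/pktgen/eth0@$q
      echo "pkt_size 256"              > /proc/net/pktgen/eth0@$q
      echo "dst 192.168.122.1"         > /proc/net/pktgen/eth0@$q
      echo "dst_mac 52:54:00:12:34:56" > /proc/net/pktgen/eth0@$q
      echo "queue_map_min $q"          > /proc/net/pktgen/eth0@$q
      echo "queue_map_max $q"          > /proc/net/pktgen/eth0@$q
  done
  echo "start" > /proc/net/pktgen/pgctrl   # blocks until all workers finish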
- other stress test
- boot and enable multiqueue in guest
- run ordinary network stress tests such as apache ab, netperf, stress, etc. (see the examples after this list)
- boot & reboot
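Possible stress commands, assuming a web server is reachable at $server and a netserver at $ip:
  ab -n 100000 -c 50 http://$server/index.html   # HTTP load with apache ab
  netperf -H $ip -t TCP_STREAM -l 60             # bulk TCP throughput for 60 seconds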
User Experience
The performance of network applications and servers that use multiple sessions in parallel will be improved.
Dependencies
None
Contingency Plan
Since this is brand new functionality, if it isn't ready in time, nothing has changed. We just drop this feature page.
Documentation
- Docs in kvm wiki
- mq virtio-net patchset
- mq tuntap patchset
- mq qemu patchset
- Libvirt semi-proposal (March 12)
Release Notes
- KVM/qemu in Fedora can start a guest with multiqueue virtio-net support, and the Fedora guest has a multiqueue virtio-net driver.
Comments and Discussion
None yet.