From Fedora Project Wiki

Revision as of 10:19, 29 January 2013


Multiqueue virtio-net


Multiqueue virtio-net provides an approach that scales network performance with the number of vcpus by allowing them to transfer packets through more than one virtqueue pair.


  • Email: <>

Current status

  • Targeted release: Fedora 19
  • Last updated: Jan 29th 2013
  • Percentage of completion: 66%

Detailed Description

Today's high-end servers have more processors, and guests running on them tend to have an increasing number of vcpus. The scalability of the protocol stack in the guest is restricted by single-queue virtio-net:

  • Network performance does not scale as the number of vcpus increases: the guest cannot transmit or receive packets in parallel, because virtio-net has only one TX and one RX virtqueue and virtio-net drivers must be synchronized before sending and receiving packets. Although there are software techniques such as RFS to spread the load across processors, they only help on the receive side and are expensive in the guest, since they depend on IPIs, which bring extra overhead in a virtualized environment.
  • Multiqueue NICs are increasingly common and are well supported by the Linux kernel, but the current virtual NIC cannot use the multiqueue support: the tap and virtio-net backends must serialize concurrent transmit/receive requests coming from different cpus.

In order to remove those bottlenecks, we must allow parallel packet processing by introducing multiqueue support in both the back-end and the guest driver. Ideally, packet handling can then be done by processors in parallel without interleaving, and network performance scales as the number of vcpus increases.

The following parts were changed to parallelize packet processing:

  • tuntap: convert the driver to multiqueue by allowing multiple sockets/fds to be attached to the device; each socket/fd exposed by the device can be treated as a queue.
  • qemu:
    • net: add multiple queue infrastructure to qemu
    • let qemu create multiple vhost threads for a virtio-net device
    • userspace multiple queue virtio-net
  • guest driver: let the driver use multiple virtqueues for packet sending/receiving.

Benefit to Fedora

Improve the performance of virtio-net with SMP guests on a Fedora host.


Scope

  • tuntap driver in kernel (DONE, in 3.8-rc)
  • guest virtio-net driver (DONE, in 3.8-rc)
  • qemu changes (patch posted, for qemu 1.4)
  • Apps (all optional but would be nice if they are done)
    • libvirt (Not done)
    • virt-manager (Not done)

How To Test
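A minimal way to exercise the feature, assuming the posted qemu 1.4 patches and a 3.8 guest kernel (the disk image path, queue count, and interface name below are placeholders):

```shell
# Start a guest with a 4-queue tap backend and a multiqueue virtio-net NIC.
# vectors = 2 * queues + 2: one TX and one RX interrupt per queue,
# plus config and control interrupts.
qemu-system-x86_64 -enable-kvm -smp 4 -m 1024 \
    -netdev tap,id=hn0,queues=4,vhost=on \
    -device virtio-net-pci,netdev=hn0,mq=on,vectors=10 \
    guest-disk.img

# Inside the guest, enable the extra queues (only one is active by default):
ethtool -L eth0 combined 4

# Verify the channel count, then compare netperf/iperf results with several
# parallel sessions against a single-queue (queues=1) setup:
ethtool -l eth0
```

Performance gains should show up when multiple sessions transmit and receive concurrently; a single stream is not expected to improve.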

User Experience

The performance of network applications/servers that use multiple sessions in parallel will be improved.



Contingency Plan

Since this is brand new functionality, if it isn't ready in time, nothing has changed. We just drop this feature page.


Release Notes

Comments and Discussion