Enable kernel acceleration for kvm networking

Summary

Enable kernel acceleration for kvm networking

Owner

Current status

  • Targeted release: Fedora 12
  • Last updated: 2009-06-06
  • Percentage of completion: 20%

Detailed Description

vhost-net moves the work of converting virtio descriptors to skbs (and back) from qemu userspace into a kernel driver, so the packet data path no longer has to pass through the qemu process.
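
For illustration only, a minimal userspace sketch of how qemu might hand the datapath to the kernel driver. This assumes the /dev/vhost-net character device and the ioctl names as they were eventually merged upstream in <linux/vhost.h>; since the kernel/user interface is still being finalized (see Scope below), the actual Fedora 12 interface may differ. Memory-table and vring setup (VHOST_SET_MEM_TABLE, VHOST_SET_VRING_*) are omitted for brevity.

 #include <fcntl.h>
 #include <stdio.h>
 #include <sys/ioctl.h>
 #include <linux/vhost.h>
 
 /* Hypothetical helper: attach one virtio-net device to the kernel driver. */
 int setup_vhost_net(int tap_fd)
 {
     /* One vhost-net instance per virtio-net device. */
     int vhost_fd = open("/dev/vhost-net", O_RDWR);
     if (vhost_fd < 0) {
         perror("open /dev/vhost-net");
         return -1;
     }
 
     /* Claim the device for this (qemu) process before any other setup. */
     if (ioctl(vhost_fd, VHOST_SET_OWNER) < 0) {
         perror("VHOST_SET_OWNER");
         return -1;
     }
 
     /* Attach the tap (or raw socket) fd as the backend for vring 0.
      * From here on the kernel driver, not qemu, converts virtio
      * descriptors to skbs and back. */
     struct vhost_vring_file backend = { .index = 0, .fd = tap_fd };
     if (ioctl(vhost_fd, VHOST_NET_SET_BACKEND, &backend) < 0) {
         perror("VHOST_NET_SET_BACKEND");
         return -1;
     }
 
     return vhost_fd;
 }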

Benefit to Fedora

Moving this work into a kernel module reduces latency and improves the packets-per-second rate for small packets.


Scope

All of the work is being done upstream, in the kernel and in qemu. The guest code is already upstream; the host kernel and qemu work is in progress. For Fedora 12 we will likely have to backport some of it.

Milestones:

- Guest Kernel:

 MSI-X support in virtio net

- Host Kernel:

 iosignalfd, irqfd, eventfd polling (see the notification sketch after these milestone lists)
 finalize kernel/user interface
 socket polling
 virtio transport with copy from/to user
 (at this point the feature can be used in production; the rest are optimizations we will most likely need)
 mergeable buffers
 TX credits using destructor (or: poll device status)
 TSO/GSO
 pin memory with get_user_pages()
 profile and tune

- qemu:

 MSI-X support in virtio net
 raw sockets support in qemu, promisc mode
 connect to kernel backend with MSI-X
 migration
 PCI interrupts emulation
 (at this point the feature can be used in production; the rest are optimizations we will most likely need)
 programming MAC
 TSO/GSO
 profile and tune
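
For illustration only, a sketch of the eventfd-based notification path that the iosignalfd/irqfd items above refer to: the guest's virtio kick and the host-to-guest interrupt are each carried by an eventfd shared between KVM and the vhost driver, so neither direction needs to wake up qemu. The ioctl names are again the ones that were eventually merged upstream and are an assumption here, since the interface is still being finalized.

 #include <stdio.h>
 #include <sys/eventfd.h>
 #include <sys/ioctl.h>
 #include <linux/vhost.h>
 
 /* Hypothetical helper: wire one vring's notifications to eventfds. */
 int wire_up_notifications(int vhost_fd, unsigned int vring_index)
 {
     /* Guest "kick": KVM signals this eventfd on the virtio doorbell
      * write (ioeventfd) and the vhost driver polls it. */
     int kick_fd = eventfd(0, 0);
 
     /* Guest interrupt: the vhost driver signals this eventfd when it
      * has used descriptors and KVM injects the interrupt (irqfd). */
     int call_fd = eventfd(0, 0);
 
     if (kick_fd < 0 || call_fd < 0) {
         perror("eventfd");
         return -1;
     }
 
     struct vhost_vring_file kick = { .index = vring_index, .fd = kick_fd };
     struct vhost_vring_file call = { .index = vring_index, .fd = call_fd };
 
     if (ioctl(vhost_fd, VHOST_SET_VRING_KICK, &kick) < 0 ||
         ioctl(vhost_fd, VHOST_SET_VRING_CALL, &call) < 0) {
         perror("VHOST_SET_VRING_KICK/CALL");
         return -1;
     }
 
     /* The same fds would also be registered with KVM as ioeventfd and
      * irqfd so the whole path bypasses userspace. */
     return 0;
 }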


Test Plan

Guest:

  • WHQL networking tests

Networking:

  • Various MTU sizes
  • Broadcasts and multicasts
  • Ethtool
  • Latency tests (an illustrative round-trip micro-test follows this list)
  • Bandwidth tests
  • UDP testing
  • Guest to guest communication
  • More types of protocol testing
  • Guest vlans
  • Combinations of multiple vnics on the guests
  • With/without {IP|TCP|UDP} offload
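
For illustration only, a minimal guest-side UDP round-trip micro-test of the kind the latency and UDP items above refer to. The peer address and the use of the standard echo service on port 7 are assumptions made for the example; real testing would use netperf or similar tools.

 #include <arpa/inet.h>
 #include <stdio.h>
 #include <sys/socket.h>
 #include <sys/time.h>
 #include <unistd.h>
 
 int main(int argc, char **argv)
 {
     /* Peer running a UDP echo service; the default address is only a
      * placeholder for the host side of the guest's network. */
     const char *peer = argc > 1 ? argv[1] : "192.168.122.1";
     const int rounds = 10000;
     char buf[64] = "ping";
 
     int s = socket(AF_INET, SOCK_DGRAM, 0);
     if (s < 0) {
         perror("socket");
         return 1;
     }
 
     struct sockaddr_in dst = { 0 };
     dst.sin_family = AF_INET;
     dst.sin_port = htons(7);            /* standard echo port */
     inet_pton(AF_INET, peer, &dst.sin_addr);
 
     struct timeval start, end;
     gettimeofday(&start, NULL);
     for (int i = 0; i < rounds; i++) {
         sendto(s, buf, sizeof(buf), 0, (struct sockaddr *)&dst, sizeof(dst));
         recv(s, buf, sizeof(buf), 0);   /* wait for the echo */
     }
     gettimeofday(&end, NULL);
 
     double usec = (end.tv_sec - start.tv_sec) * 1e6
                 + (end.tv_usec - start.tv_usec);
     printf("average round trip: %.1f usec over %d rounds\n",
            usec / rounds, rounds);
     close(s);
     return 0;
 }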

Virtualization:

  • Live migration

Kernel side:

  • Load/unload driver

User Experience

Users should see faster networking, at least when using SR-IOV or a dedicated per-guest network device.

Dependencies

  • The kernel acceleration is implemented in the kernel rpm and depends on changes in qemu-kvm to work correctly.

Contingency Plan

  • If it turns out to be unstable, we will not turn it on by default.

Documentation

Release Notes

Comments and Discussion

  • See Talk:Features/VHostNet