Features/VHostNet


Summary

Enable kernel acceleration for kvm networking

Owner

Michael S. Tsirkin

Current status

  • Targeted release: Fedora 12
  • Last updated: 2009-06-06
  • Percentage of completion: 20%

Detailed Description

vhost net moves the task of converting virtio descriptors to skbs and back out of qemu userspace and into a kernel driver.
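
As an illustration, here is a minimal sketch of that handoff, assuming the /dev/vhost-net character device and the VHOST_* ioctls that eventually landed upstream in <linux/vhost.h>; the kernel/user interface was still being finalized when this page was written, and a real device model must also program the memory table, vrings, and eventfds, which the sketch omits:

 /* Hand the virtio-net queues of one device over to the kernel. */
 #include <fcntl.h>
 #include <stdio.h>
 #include <sys/ioctl.h>
 #include <linux/vhost.h>

 int vhost_attach(int tap_fd)
 {
     /* One /dev/vhost-net fd per accelerated virtio-net device. */
     int vhost_fd = open("/dev/vhost-net", O_RDWR);
     if (vhost_fd < 0) {
         perror("open /dev/vhost-net");
         return -1;
     }

     /* Claim the device; ties it to this process's memory map. */
     if (ioctl(vhost_fd, VHOST_SET_OWNER) < 0) {
         perror("VHOST_SET_OWNER");
         return -1;
     }

     /* Point each virtqueue (0 = rx, 1 = tx) at the tap device.
      * From here on, descriptor<->skb conversion happens in-kernel. */
     for (unsigned i = 0; i < 2; i++) {
         struct vhost_vring_file backend = { .index = i, .fd = tap_fd };
         if (ioctl(vhost_fd, VHOST_NET_SET_BACKEND, &backend) < 0) {
             perror("VHOST_NET_SET_BACKEND");
             return -1;
         }
     }
     return vhost_fd;
 }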

Benefit to Fedora

Doing this conversion in a kernel module avoids a trip through qemu userspace for every packet, which reduces latency and improves packets-per-second throughput for small packets.


Scope

The work is all being done upstream in the kernel and qemu. Guest code is already upstream. Host/qemu work is in progress. For Fedora 12, we will likely have to backport some of it.

Milestones:

- Guest Kernel:

  • MSI-X support in virtio net

- Host Kernel:

  • iosignalfd, irqfd, eventfd polling
  • finalize kernel/user interface
  • socket polling
  • virtio transport with copy from/to user
    (at this point it can be used in production; the rest are optimizations we will most likely need)
  • mergeable buffers
  • TX credits using destructor (or: poll device status)
  • TSO/GSO
  • pin memory with get_user_pages
  • profile and tune

- qemu:

  • MSI-X support in virtio net
  • raw sockets support in qemu, promisc mode (see the sketch after this list)
  • connect to kernel backend with MSI-X
  • migration
  • PCI interrupt emulation
    (at this point it can be used in production; the rest are optimizations we will most likely need)
  • programming the MAC address
  • TSO/GSO
  • profile and tune
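
For the raw-socket milestone above, here is a hedged sketch of what opening a host NIC in promiscuous mode could look like using the standard Linux packet-socket API; qemu's actual implementation may differ, and the interface name is illustrative:

 /* Bind an AF_PACKET socket to a host NIC and enable promiscuous
  * mode so frames addressed to the guest's MAC are received. */
 #include <arpa/inet.h>
 #include <linux/if_ether.h>
 #include <net/if.h>
 #include <netpacket/packet.h>
 #include <stdio.h>
 #include <string.h>
 #include <sys/socket.h>

 int open_raw(const char *ifname)
 {
     int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
     if (fd < 0) {
         perror("socket(AF_PACKET)");
         return -1;
     }

     /* Bind to one interface so only its traffic is seen. */
     struct sockaddr_ll sll;
     memset(&sll, 0, sizeof(sll));
     sll.sll_family   = AF_PACKET;
     sll.sll_protocol = htons(ETH_P_ALL);
     sll.sll_ifindex  = if_nametoindex(ifname);
     if (bind(fd, (struct sockaddr *)&sll, sizeof(sll)) < 0) {
         perror("bind");
         return -1;
     }

     /* Promiscuous mode for this interface only. */
     struct packet_mreq mr;
     memset(&mr, 0, sizeof(mr));
     mr.mr_ifindex = sll.sll_ifindex;
     mr.mr_type    = PACKET_MR_PROMISC;
     if (setsockopt(fd, SOL_PACKET, PACKET_ADD_MEMBERSHIP, &mr, sizeof(mr)) < 0) {
         perror("PACKET_ADD_MEMBERSHIP");
         return -1;
     }
     return fd;
 }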


Test Plan

Networking:

  • Various MTU sizes
  • Broadcasts and multicasts
  • Ethtool
  • Latency tests
  • Bandwidth tests
  • UDP testing (a round-trip probe is sketched after these lists)
  • Guest to guest communication
  • More types of protocol testing
  • Guest VLANs
  • Combinations of multiple vnics on the guests
  • With/without {IP|TCP|UDP} offload

Virtualization:

  • Live migration

Kernel side:

  • Load/unload driver
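
For the latency, bandwidth, and UDP items above, a round-trip probe along these lines is one option; the guest address, echo port, and round count are illustrative, and it assumes a UDP echo service running in the guest (a tool such as netperf gives more rigorous numbers):

 /* Time UDP round trips to a guest's echo service. */
 #include <arpa/inet.h>
 #include <stdio.h>
 #include <sys/socket.h>
 #include <sys/time.h>
 #include <unistd.h>

 int main(void)
 {
     const char *guest_ip = "192.168.122.10";  /* illustrative guest address */
     const int rounds = 1000;
     char buf[64] = "ping";

     int fd = socket(AF_INET, SOCK_DGRAM, 0);
     if (fd < 0) {
         perror("socket");
         return 1;
     }
     struct sockaddr_in dst = { .sin_family = AF_INET,
                                .sin_port = htons(7) };  /* UDP echo port */
     inet_pton(AF_INET, guest_ip, &dst.sin_addr);

     struct timeval t0, t1;
     gettimeofday(&t0, NULL);
     for (int i = 0; i < rounds; i++) {
         sendto(fd, buf, sizeof(buf), 0,
                (struct sockaddr *)&dst, sizeof(dst));
         recv(fd, buf, sizeof(buf), 0);  /* blocks until the echo arrives */
     }
     gettimeofday(&t1, NULL);

     double usec = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
     printf("avg round trip: %.1f usec\n", usec / rounds);
     close(fd);
     return 0;
 }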

User Experience

Users should see faster networking, at least in the case of SR-IOV or a dedicated per-guest network device.
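
As an example, once the qemu-kvm side lands, enabling the kernel backend should amount to a tap netdev option along these lines (the exact option name is an assumption until the interface is finalized, and guest.img is a placeholder):

 qemu-kvm -m 1024 -drive file=guest.img -netdev tap,id=net0,vhost=on -device virtio-net-pci,netdev=net0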

Dependencies

  • Kernel acceleration is implemented in the kernel RPM and depends on changes in qemu-kvm to work correctly.

Contingency Plan

  • If it turns out to be unstable, we will not turn it on by default.

Documentation

Release Notes

Comments and Discussion