Virt Storage Migration

Summary

Migrate a running virtual machine from one host to another, including its in-use storage, with no downtime. No shared storage between the two hosts is required.

Owner

Current status

  • Targeted release: Fedora 19
  • Last updated: 2013-05-15
  • Percentage of completion: 100%

Detailed Description

Live migration of a VM has been around for a while, but historically it required that the VM's disk images be kept on storage shared between the source and destination hosts, and mounted at the same location on both.

qemu has included a storage migration feature since version 0.12 (December 2009), but it was inflexible, and inefficient to the point that any workload in the guest would often prevent the migration from ever completing. While supported in libvirt/virsh, it was still difficult to use, requiring stub disk images to be present on the destination host.

New developments in QEMU allow migrating a VM with no shared storage between the source and destination, and do so in a performant manner.
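
At the libvirt level, the new QEMU support is exposed through virsh migrate's non-shared-storage flags. The sketch below shows the two variants; the guest name and destination URI are placeholders, and exact behavior depends on the qemu and libvirt versions listed under Scope.

  # Full copy: mirror the entire disk image to the destination host
  # (guest name and destination host are placeholders)
  virsh migrate --live --copy-storage-all myguest qemu+ssh://dest.example.com/system

  # Incremental copy: transfer only blocks not already present on the
  # destination, e.g. when both hosts already have a common backing image
  virsh migrate --live --copy-storage-inc myguest qemu+ssh://dest.example.com/system

Under the hood, the destination qemu exports the target image over its built-in NBD server and the source mirrors disk blocks into it while the normal RAM migration proceeds, which keeps the copy efficient even with an active guest workload.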

Benefit to Fedora

This feature is equivalent to VMware's "Storage vMotion", so it brings Fedora virt closer in functionality to the proprietary alternative. Plus, it's a cool feature that makes migration much simpler to try out.

Scope

  • QEMU block streaming and internal NBD sharing (DONE, in QEMU 1.3)
  • Libvirt/virsh support (DONE, in libvirt 1.0.3)
  • virt-manager support (optional, not done)

How To Test
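
A rough outline of one way to exercise the feature, assuming two F19 hosts with the qemu and libvirt versions listed under Scope, SSH access between them, and a running guest (here called myguest) whose disk image lives on local, non-shared storage on the source host. All names, paths, and sizes below are placeholders.

  # On the source host: confirm the guest is running and locate its disk image
  virsh list
  virsh domblklist myguest

  # On the destination host: depending on the libvirt version, an empty target
  # image of the same size and format may need to exist at the same path first
  qemu-img create -f qcow2 /var/lib/libvirt/images/myguest.qcow2 20G

  # On the source host: live migrate the guest, copying its storage as well
  virsh migrate --live --copy-storage-all myguest qemu+ssh://dest.example.com/system

  # On the destination host: the guest should now be listed as running,
  # with its disk contents intact
  virsh list

If the guest stays responsive (for example over SSH or VNC) while the storage is copied, and ends up running on the destination with no interruption, the feature is working as intended.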

User Experience

Virt users who are already migrating guests will no longer require shared storage. Anyone interested in trying migration does not need to set up shared storage first.

Dependencies

None.

Contingency Plan

Since this is brand new functionality, if it does not land in time for F19, nothing else changes; we simply drop this feature page.

Documentation

Release Notes

KVM and libvirt now support a performant way to live migrate virtual machines with no shared storage between the hosts. A running VM and its disk images are relocated to a new host with no downtime.

Comments and Discussion

None yet.