High Availability Container Resources
The Container Resources feature allows the HA stack (Pacemaker + Corosync) residing on a host machine to extend management of resources into virtual guest instances (KVM/LXC).
- Name: David Vossel
- Email: <email@example.com>
- Targeted release: Fedora 19
- Last updated: 1-25-2013
- Percentage of completion: 60%
Detailed Description
This feature is in response to the growing desire for high availability functionality to be extended outside of the host into virtual guest instances. Pacemaker is currently capable of managing virtual guests, meaning Pacemaker can start/stop/monitor/migrate virtual guests anywhere in the cluster, but Pacemaker has no ability to manage the resources that live within the virtual guests. At the moment, these virtual guests are very much a black box to Pacemaker.
The Container Resources feature changes this by giving Pacemaker the ability to reach into the virtual guests and manage resources in exactly the same way resources are managed on the host nodes. Ultimately, this gives the HA stack the ability to manage resources across all the nodes in the cluster as well as any virtual guests that reside within those cluster nodes.
Benefit to Fedora
This feature expands our current high availability functionality to span across both physical bare-metal cluster nodes and the virtual environments that reside within those nodes. Without this feature, there is currently no direct approach for achieving this functionality.
Scope
LRMD TLS Backend and Remote Client:
Pacemaker's existing LRMD (Local Resource Manager Daemon) must be modified to allow the client and server to communicate remotely over TCP. TLS with PSK (pre-shared key) encryption/authentication will be used to secure the connection between the LRMD client and server. A standalone LRMD running a TLS backend on a virtual guest will allow the CRMD on the host machine to manage resources within that guest.
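Because PSK requires both ends to hold the same secret, some out-of-band key provisioning step is implied. A minimal sketch of what that could look like is below; the key path, key size, daemon name, and copy mechanism are all assumptions, not settled details of the feature:

```shell
# Hypothetical PSK provisioning sketch. Key location and size are assumptions.
key=./authkey                        # e.g. /etc/pacemaker/authkey on a real system
head -c 4096 /dev/urandom > "$key"   # generate a random shared secret
chmod 600 "$key"                     # both ends must read the same secret; keep it private
stat -c %s "$key"                    # prints 4096: key generated at full size

# The same key would then be copied into the guest, for example:
#   scp "$key" guest1:/etc/pacemaker/authkey
# and the standalone LRMD started there to accept TLS-PSK connections
# from the CRMD on the host.
```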
Pengine Container Resource Support:
Pacemaker's policy engine component needs to be able to understand how to represent and manage container resources. This is as simple as the policy engine understanding that container resources are both a resource and a location other resources can be placed after the container resource has started. The policy engine will contain routing information in each resource action to specify which LRMD an action should go to and on what node. This is how the CRMD will know to route certain actions to a remote LRMD instance living on a virtual guest rather than the local LRMD instance living on the host.
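Since the CIB is XML, one plausible representation is a meta attribute on the guest's resource definition that names the location it provides. The fragment below is purely illustrative; the `remote-node` attribute name and its placement are assumptions, not a settled schema:

```xml
<!-- Illustrative only: marking a VirtualDomain resource as a container
     whose guest can host other resources. Attribute names are assumptions. -->
<primitive id="vm-guest1" class="ocf" provider="heartbeat" type="VirtualDomain">
  <meta_attributes id="vm-guest1-meta">
    <!-- tells the policy engine that resources may be placed on "guest1"
         once this resource is started, with their actions routed to the
         remote LRMD inside the guest -->
    <nvpair id="vm-guest1-remote" name="remote-node" value="guest1"/>
  </meta_attributes>
</primitive>
```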
CRMD Routing Support:
Pacemaker's CRMD component must now be capable of routing LRMD commands to both the local LRMD and remote LRMD instances residing on virtual guests.
LXC Resource Agent:
A Fedora-supported LXC resource agent must be created. An open-source agent likely already exists somewhere in the community that this work can be based on.
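The shape of such an agent can be sketched as a standard OCF-style script. The function below is a hypothetical skeleton only; the parameter name, the `lxc-*` invocations, and the metadata stub are assumptions about the eventual agent, though the action names and exit codes follow OCF conventions:

```shell
# Hypothetical OCF-style skeleton for an LXC agent. In a real agent this
# would be a standalone script receiving the requested action as $1.
lxc_agent() {
    container="${OCF_RESKEY_container:-guest1}"    # resource parameter (assumed name)
    case "$1" in
        start)   lxc-start -n "$container" -d ;;                # boot the container
        stop)    lxc-stop  -n "$container" ;;                   # shut it down
        monitor) lxc-info  -n "$container" 2>/dev/null \
                   | grep -q RUNNING || return 7 ;;             # 7 = OCF_NOT_RUNNING
        meta-data) printf '<resource-agent name="lxc"/>\n' ;;   # minimal metadata stub
        *)       return 3 ;;                                    # 3 = OCF_ERR_UNIMPLEMENTED
    esac
}
```

A full agent would also validate its parameters and emit complete OCF metadata, but the action/exit-code contract above is the core of what Pacemaker expects.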
PCS management support for Container Resources:
The pcs management tool for pacemaker needs to support whatever configuration mechanism is settled upon for representing container resources in the CIB.
How To Test
The exact configuration steps necessary to test this feature are not yet defined. Below are the general steps that will be necessary to configure and test a Container Resource on an HA cluster node.
1. Create a virtual guest instance (KVM virtual machine or LXC container).
2. Configure the standalone LRMD daemon to launch in the virtual guest on startup.
3. Using pcs on the host node, define the virtual guest resource and mark it as a container resource.
4. Using pcs on the host node, define a Dummy resource and mark it as a child of the container resource.
5. Once Pacemaker finishes starting the virtual guest resource, the Dummy resource should be launched and monitored within the guest.
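For a KVM guest, steps 3-5 might look like the following pcs session. This is a sketch, not settled syntax: the resource names, the VirtualDomain parameters, and the way a resource is marked as a container (shown here as a `remote-node` meta attribute and a location preference) are all assumptions:

```shell
# Hypothetical pcs session; names, parameters, and the container-marking
# mechanism are assumptions about configuration syntax that was not yet final.
pcs resource create vm-guest1 VirtualDomain \
    hypervisor="qemu:///system" config="/etc/libvirt/qemu/guest1.xml" \
    meta remote-node=guest1            # step 3: guest becomes a container resource

pcs resource create test-dummy Dummy \
    op monitor interval=30s            # step 4: resource to run inside the guest

pcs constraint location test-dummy prefers guest1   # step 5: place it on the guest

pcs status    # once vm-guest1 is up, test-dummy should report as started on guest1
```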
User Experience
Users will be able to define resources as container resources. Once a container resource is defined, that resource becomes a location capable of running resources, just as if it were another cluster node.
Dependencies
No new Pacemaker dependencies will be required for this feature.
Contingency Plan
If this feature is not complete by the development freeze, the functionality necessary to configure it will not be enabled in the stable configuration scheme. The feature is designed so that it can be disabled without negatively affecting any existing functionality.