SimpleHekaFS

Hypervisor/Host
I'm using an 8-CPU, 24GB RAM machine running Fedora 15 (F15), with a QLogic FC HBA connected to back-end NAS storage.

Brick node guests on host
I created four guests running F15 to use as brick nodes (servers):


 * F15node1 (192.168.122.21)
 * F15node2 (192.168.122.22)
 * F15node3 (192.168.122.23)
 * F15node4 (192.168.122.24)

(You may create as many as you want for your set-up.)

Client node guest(s) on host
I created a single guest as a client:


 * F15node5 (192.168.122.25)

(You may create more than one.)

Back-end storage
I provisioned eight 5GB LUNs on a NAS device, two per brick node in this case; the actual size and number of LUNs you use is up to you. Because the LUNs attach in an apparently random order on every boot, I used the Disk Utility to add a label to each LUN: guest1vol0, guest1vol1, guest2vol0, guest2vol1, guest3vol0, guest3vol1, guest4vol0, and guest4vol1. Add the LUNs to the guest brick VMs using Add Hardware->Storage in the guest detail window: choose 'Select managed or other existing storage', enter /dev/disk/by-label/guestXvolY, and set Device type: SCSI disk, Cache mode: none, Storage format: raw. Use other Cache mode and Storage format options at your discretion.

Important Links

 * HekaFS git repo is git://git.fedorahosted.org/CloudFS.git
 * F16 glusterfs RPMs are at http://koji.fedoraproject.org/koji/packageinfo?packageID=5443
 * F16 hekafs RPM is at http://koji.fedoraproject.org/koji/packageinfo?packageID=12428
 * HekaFS wiki page is https://fedoraproject.org/wiki/HekaFS
 * F16 HekaFS Feature wiki page is https://fedoraproject.org/wiki/Features/HekaFS
 * Jeff Darcy's HekaFS.org blog at http://hekafs.org/blog/

The nitty gritty

 * On each server node, append root's .ssh/id_dsa.pub from every node to root's .ssh/authorized_keys. Ensure that root can ssh between nodes without being prompted for a password.
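One way to set this up (a sketch; adjust the IP list to match your own nodes) is:

```shell
# On each brick node, as root: generate a DSA key if one doesn't exist yet
test -f ~/.ssh/id_dsa || ssh-keygen -t dsa -N "" -f ~/.ssh/id_dsa

# Push the public key to every node (including this one, for simplicity)
for node in 192.168.122.21 192.168.122.22 192.168.122.23 192.168.122.24; do
    ssh-copy-id -i ~/.ssh/id_dsa.pub root@$node
done
```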
 * download the glusterfs, glusterfs-server, glusterfs-fuse, and hekafs RPMs
 * install the RPMs on all server nodes and client nodes
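For example, after downloading the RPMs from the Koji links above (the wildcards are illustrative; substitute the actual filenames you downloaded):

```shell
# On each server node; on client nodes, glusterfs and glusterfs-fuse
# alone are sufficient, but installing the full set does no harm.
yum localinstall --nogpgcheck glusterfs-*.rpm hekafs-*.rpm
```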
 * on each server node make brick file systems for each of the LUNs and mount them, e.g. using ext4:
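A sketch of this step, assuming the two LUNs on each node appear as /dev/sda and /dev/sdb and are mounted under /bricks as used later in this walkthrough:

```shell
# Make an ext4 file system on each LUN and mount it under /bricks
for dev in sda sdb; do
    mkfs -t ext4 /dev/$dev
    mkdir -p /bricks/$dev
    mount /dev/$dev /bricks/$dev
done
```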
 * optionally, make /etc/fstab entries for the mounts
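Matching the mounts above, the entries would look something like:

```
/dev/sda    /bricks/sda    ext4    defaults    0 0
/dev/sdb    /bricks/sdb    ext4    defaults    0 0
```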
 * Note: if you use xfs on iSCSI LUNs shared from the qemu/kvm host, the guests do not always probe and initialize the iSCSI LUNs correctly (or fast enough), and will usually require manual intervention during boot: unmount all bricks, run xfs_repair on each iSCSI device (e.g. /dev/sda), remount the bricks, and type `exit` to continue booting.
 * on each server node open ports in the firewall using the firewall admin utility:
 * open port 8080 tcp (Other Ports)
 * open ports 24007-24029 tcp (Other Ports)
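If you prefer the command line to the firewall GUI, the equivalent iptables rules look something like this (a sketch; how you persist the rules depends on your setup):

```shell
iptables -I INPUT -p tcp --dport 8080 -j ACCEPT         # hekafsd web UI
iptables -I INPUT -p tcp --dport 24007:24029 -j ACCEPT  # gluster daemons and bricks
service iptables save                                   # persist the rules
```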
 * set up gluster on each brick node
 * on each server node enable glusterfsd, glusterd, and hekafsd:
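With systemd on F15, enabling the services looks like:

```shell
systemctl enable glusterfsd.service
systemctl enable glusterd.service
systemctl enable hekafsd.service
```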
 * on each server node start glusterd and hekafsd:
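Again with systemd:

```shell
systemctl start glusterd.service
systemctl start hekafsd.service
```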
 * open a browser window to the principal server node on port 8080 (http://192.168.122.21:8080)
 * configure the nodes in your cluster
 * select Manage Servers
 * the IP address of the first or principal node is already listed
 * enter the IP address or node name and press Add
 * click Back to cluster configuration
 * repeat for each node in your cluster
 * press Done
 * configure one or more volumes in your cluster
 * select Manage Volumes
 * As described above, each node in my cluster has two bricks, /dev/sda and /dev/sdb, mounted on /bricks/sda and /bricks/sdb
 * tick the checkbox for /bricks/sda on each node
 * leave Volume Type: set to Plain
 * leave Replica or Stripe count blank (unset)
 * enter testsda for Volume ID
 * press Provision
 * add_local(testsda) OK ... is displayed for all four nodes
 * click Back to volume configuration
 * testsda is now shown in the list of Existing Volumes
 * repeat as desired for additional volumes
 * press Done
 * configure one or more tenants in your cluster
 * select Manage Tenants
 * enter bob as the Tenant Name
 * enter  as the Tenant Password
 * enter  as the Tenant UID Range: Low
 * enter  as the Tenant UID Range: High
 * enter  as the Tenant GID Range: Low
 * enter  as the Tenant GID Range: High
 * press Add
 * add_local(bob) OK ... is displayed for all four nodes
 * click Back to tenant configuration
 * bob is now shown in the list of Existing Tenants
 * repeat as desired for additional tenants
 * click volumes in the entry for bob
 * testsda is shown in the Volume List
 * tick the Enabled checkbox for testsda
 * press Update
 * Volumes enabled for bob ... is displayed for all four nodes
 * click Back to tenant configuration
 * press Done
 * start the volume(s)
 * select Manage Volumes
 * click start testsda entry in the list of Existing Volumes
 * start_local(testsda) returned 0 ... is displayed for all four nodes
 * mount the volume(s) on the client(s)
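A sketch of a plain GlusterFS FUSE mount on the client, using the volume name from this walkthrough (note that HekaFS tenant volumes require tenant credentials, so check the client-side tooling shipped in the hekafs package, e.g. hfs_mount, for its exact usage on your install rather than relying on a bare mount):

```shell
# On the client node (F15node5)
mkdir -p /mnt/testsda
mount -t glusterfs 192.168.122.21:/testsda /mnt/testsda
```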