SimpleHekaFS

From FedoraProject

Revision as of 18:57, 9 September 2011 by Kkeithle



Setting up a simple HekaFS cluster

Hypervisor/Host

I used an 8 CPU, 24GB RAM machine, with a qlogic FC HBA connected to back-end storage, running F15.

Brick node guests on host

I created four guests running F15 to use as bricks (servers):

  • F15node1 (192.168.122.21)
  • F15node2 (192.168.122.22)
  • F15node3 (192.168.122.23)
  • F15node4 (192.168.122.24)

You may create as many as you want for your setup.

Client node guest(s) on host

I created a single guest as a client:

  • F15node5 (192.168.122.25)

Back-end storage

I provisioned 40 5G LUNs on a NAS device connected to the qlogic fibre HBA, giving me 10 LUNs per brick. The size and number of LUNs you use are up to you. Add the LUNs to the bricks using Add Hardware->Storage in the guest detail window. Choose 'Select managed or other existing storage', enter /dev/sdxx, Device Type: SCSI disk, Cache mode: none, Storage format: raw. Set the other parameters at your discretion.
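For reference, the GUI choices above (SCSI disk, cache mode none, raw format) correspond roughly to a libvirt domain XML disk element like the following sketch; the source path /dev/sdd is illustrative and must match one of your host LUNs:

```xml
<disk type='block' device='disk'>
  <!-- raw format, no host caching, as selected in the Add Hardware dialog -->
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/sdd'/>
  <target dev='sda' bus='scsi'/>
</disk>
```

You can inspect what the GUI actually produced with `virsh dumpxml F15node1`.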

Thus on the hypervisor/host I have /dev/sda, /dev/sdb, and /dev/sdc which are real drives in the box, and /dev/sdd ... /dev/sdaq which are the iSCSI LUNs; /dev/sdd maps to /dev/sda on the first brick (F15node1), /dev/sdm maps to /dev/sda on the second brick (F15node2), /dev/sdx maps to /dev/sda on the third brick (F15node3), and /dev/sdah maps to /dev/sda on the fourth brick (F15node4).
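With this many LUNs it is easy to lose track of which host device backs which guest disk. A small sketch of a helper (the function name is ours, not part of HekaFS) that prints Linux sd device suffixes from a zero-based index can help map them out:

```shell
# lun_names START COUNT: print COUNT sd device names starting at zero-based
# index START (0 = sda, 25 = sdz, 26 = sdaa, ...). Helper name is ours.
lun_names() {
    start=$1 count=$2
    letters=abcdefghijklmnopqrstuvwxyz
    i=0
    while [ "$i" -lt "$count" ]; do
        n=$((start + i))
        if [ "$n" -lt 26 ]; then
            # single-letter suffix: sda .. sdz
            echo "sd$(echo "$letters" | cut -c$((n + 1)))"
        else
            # two-letter suffix: sdaa onward
            echo "sd$(echo "$letters" | cut -c$((n / 26)))$(echo "$letters" | cut -c$((n % 26 + 1)))"
        fi
        i=$((i + 1))
    done
}

lun_names 3 3   # prints sdd, sde, sdf: the first three host-side LUNs
```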


The nitty gritty

  • download the glusterfs, glusterfs-server, glusterfs-fuse, and hekafs RPMs
  • install the RPMs on all brick nodes and client nodes
    • If you use the hekafs RPM on RHEL, change line 23 of /etc/init.d/hekafsd from python2.7 to python2.6 after installing. This will be fixed in the next release of the hekafs RPM.
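The RHEL interpreter fix above can be scripted. Here is a sketch (the function name is ours) that edits line 23 in place and keeps a backup:

```shell
# fix_hekafsd FILE: on line 23 only, replace python2.7 with python2.6,
# keeping a FILE.orig backup. Helper name is ours, not part of HekaFS.
fix_hekafsd() {
    cp "$1" "$1.orig" &&
    sed -i '23s/python2\.7/python2.6/' "$1"
}

# Usage on a RHEL brick node (needs root):
#   fix_hekafsd /etc/init.d/hekafsd
```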
  • on each brick node make filesystems for each of the LUNs and mount them, e.g. using ext4:
    • for lun in /dev/sd? ; do sudo mkfs.ext4 "$lun"; done
    • for dev in /dev/sd? ; do sudo mkdir -p /bricks/$(basename "$dev"); done
    • for dev in /dev/sd? ; do sudo mount "$dev" /bricks/$(basename "$dev"); done
    • optionally, make /etc/fstab entries for the mounts
    • Note: if you use xfs on iSCSI LUNs shared from the qemu/kvm host, the guests will not always probe and initialize the LUNs correctly (or quickly enough), and will usually require manual intervention during boot: unmount all bricks, run xfs_repair on each iSCSI device (e.g. /dev/sda), remount the bricks, and `exit` to continue booting.
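For the optional /etc/fstab entries, something like this sketch works (the helper name is ours; append its output to /etc/fstab as root, then verify with `mount -a`):

```shell
# fstab_line DEV: print an /etc/fstab entry mounting ext4 DEV on /bricks/<name>.
# Mount options are a reasonable default, not mandated by HekaFS.
fstab_line() {
    echo "$1 /bricks/$(basename "$1") ext4 defaults,noatime 0 0"
}

# Example: emit entries for every brick LUN, then append as root:
#   for dev in /dev/sd?; do fstab_line "$dev"; done | sudo tee -a /etc/fstab
fstab_line /dev/sda
```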
  • on each brick node open ports in the firewall using the firewall admin utility:
    • open port 8080 tcp (Other Ports)
    • open ports 24007-24029 tcp (Other Ports)
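If you prefer editing the firewall by hand instead of using the admin utility, the equivalent rules in /etc/sysconfig/iptables look roughly like this (a sketch; insert them before the final REJECT rule and run `service iptables restart` — exact file contents vary by install):

```
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24007:24029 -j ACCEPT
```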
  • set up gluster on each brick node
    1. on each brick node enable glusterfsd, glusterd, and hekafsd:
      • chkconfig glusterfsd on
      • chkconfig glusterd on
      • chkconfig hekafsd on
    2. on each brick node start glusterd and hekafsd:
      • service glusterd start
      • service hekafsd start
    3. open a browser window to the principal node on port 8080 (http://192.168.122.21:8080). We used Google Chrome; in the past Firefox seemed not to work well, YMMV.
    4. configure the nodes in your cluster
      • select Manage Servers
      • the IP address of the first or principal node is already listed
      • enter the IP address or node name and press Add
      • click Back to cluster configuration
      • repeat for each node in your cluster
      • press Done
    5. configure one or more volumes in your cluster
      • select Manage Volumes
      • As described above, each node in my cluster has ten LUNs: /dev/sda ... /dev/sdj, mounted on /bricks/sda ... /bricks/sdj
      • tick the checkbox for /bricks/sda on each node
      • leave Volume Type: set to Plain
      • leave Replica or Stripe count blank (unset)
      • enter testsda for Volume ID
      • press Provision
      • add_local(testsda) OK ... is displayed for all four nodes
      • click Back to volume configuration
      • testsda is now shown in the list of Existing Volumes
      • repeat as desired for additional volumes
      • use the Back button in your browser to return to the Configuration Main menu
    6. configure one or more tenants in your cluster
      • select Manage Tenants
      • enter bob as the Tenant Name
      • enter carol as the Tenant Password
      • enter 10000 as the Tenant UID Range: Low
      • enter 10999 as the Tenant UID Range: High
      • enter 10000 as the Tenant GID Range: Low
      • enter 10999 as the Tenant GID Range: High
      • press Add
      • add_local(bob) OK ... is displayed for all four nodes
      • click Back to tenant configuration
      • bob is now shown in the list of Existing Tenants
      • repeat as desired for additional tenants
      • click volumes in the entry for bob
      • testsda is shown in the Volume List
      • tick the Enabled checkbox for testsda
      • press Update
      • Volumes enabled for bob ... is displayed for all four nodes
      • click Back to tenant configuration
    7. start the volume(s)
      • use the Back button in your browser to return to the Configuration Main menu
      • select Manage Volumes
      • click start in the testsda entry in the list of Existing Volumes
      • start_local(testsda) returned 0 ... is displayed for all four nodes
  • mount the volume(s) on the client(s)
    • sudo hfs_mount 192.168.122.21 testsda bob carol /mnt/testsda