SimpleHekaFS

From FedoraProject

Latest revision as of 13:23, 14 November 2011


Setting up a simple HekaFS cluster

Hypervisor/Host

I'm using an 8 CPU, 24GB RAM machine running F15, with a qlogic FC HBA to back-end NAS.

Brick node guests on host

I created four guests running F15 to use as brick nodes (servers):

  • F15node1 (192.168.122.21)
  • F15node2 (192.168.122.22)
  • F15node3 (192.168.122.23)
  • F15node4 (192.168.122.24)

(You may create as many as you want for your set-up.)

Client node guest(s) on host

I created a single guest as a client:

  • F15node5 (192.168.122.25)

(You may create more than one.)

Back-end storage

I provisioned eight 5G LUNs on a NAS device, two LUNs per brick; the actual size and number of LUNs is up to you. Because the LUNs attach in an apparently random order on every boot, I used the Disk Utility to label each LUN: guest1vol0, guest1vol1, guest2vol0, guest2vol1, guest3vol0, guest3vol1, guest4vol0, and guest4vol1. Add the LUNs to the brick guest VMs using Add Hardware->Storage in the guest detail window: choose 'Select managed or other existing storage', enter /dev/disk/by-label/guestXvolY, and set Device type: SCSI disk, Cache mode: none, Storage format: raw. Use other Cache mode and Storage format options at your discretion.
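If you prefer the command line to the Add Hardware dialog, a loop over the labels can generate the equivalent `virsh attach-disk` commands. This is only a sketch: the guest names (f15node1..4) and the free target slots (sdb, sdc) are assumptions for this example, so it prints the commands rather than running them; drop the `echo` once you have checked them.

```shell
# Print one virsh attach-disk command per labeled LUN (dry run).
# Guest names and sdb/sdc targets are assumptions; adjust to your VMs.
generate_attach_cmds() {
  for guest in 1 2 3 4; do
    local target=sdb
    for vol in 0 1; do
      echo "virsh attach-disk f15node${guest} /dev/disk/by-label/guest${guest}vol${vol} ${target} --cache none --persistent"
      target=sdc
    done
  done
}
generate_attach_cmds
```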

Important Links

  • HekaFS git repo is git://git.fedorahosted.org/CloudFS.git
  • F16 glusterfs RPMs are at http://koji.fedoraproject.org/koji/packageinfo?packageID=5443
  • F16 hekafs RPM is at http://koji.fedoraproject.org/koji/packageinfo?packageID=12428
  • HekaFS wiki page is https://fedoraproject.org/wiki/HekaFS
  • F16 HekaFS Feature wiki page is https://fedoraproject.org/wiki/Features/HekaFS
  • Jeff Darcy's HekaFS.org blog is at http://hekafs.org/blog/

The nitty gritty

  • On each server node, append root's .ssh/id_dsa.pub to root's .ssh/authorized_keys on every node, and verify that root can ssh between the nodes without being prompted for a password.
  • download the glusterfs, glusterfs-server, glusterfs-fuse, and hekafs RPMs
  • install the RPMs on all server nodes and client nodes
  • on each server node make brick file systems for each of the LUNs and mount them, e.g. using ext4 (N.B. the /dev/sd? glob below assumes the LUNs are the only /dev/sdX devices in the guest; adjust it if they are not):
    • for lun in /dev/sd? ; do sudo mkfs.ext4 $lun; done
    • for dev in /dev/sd? ; do sudo mkdir -p /bricks/`basename $dev`; done
    • for dev in /dev/sd? ; do sudo mount $dev /bricks/`basename $dev`; done
    • optionally make /etc/fstab entries for the mounts
    • Note: if you use xfs on iSCSI LUNs shared from the qemu/kvm host, the guests will not always probe and initialize the iSCSI LUNs quickly enough, and the guests will usually require manual intervention during boot: unmount all bricks, run xfs_repair on each iSCSI device (e.g. /dev/sda), remount the bricks, and run `exit` to continue booting.
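The optional /etc/fstab entries could look like the following (a sketch, assuming ext4 and /dev/sda and /dev/sdb bricks; given the random attach-order caveat above, mounting by LABEL= or UUID= instead of /dev/sdX names would be more robust):

```
# /etc/fstab entries for the brick file systems (sketch; adjust to your devices)
/dev/sda   /bricks/sda   ext4   defaults,noatime   0 0
/dev/sdb   /bricks/sdb   ext4   defaults,noatime   0 0
```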
  • on each server node open ports in the firewall using the firewall admin utility:
    • open port 8080 tcp (Other Ports)
    • open ports 24007-24029 tcp (Other Ports)
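If you manage /etc/sysconfig/iptables by hand instead of using the firewall admin utility, rules along these lines should have the same effect (a sketch, not verified against the tool's exact output; add them before the final REJECT rule, then run `service iptables restart`):

```
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8080 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 24007:24029 -j ACCEPT
```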
  • set up gluster on each brick node
    1. on each server node enable glusterfsd, glusterd, and hekafsd:
      • chkconfig glusterfsd on
      • chkconfig glusterd on
      • chkconfig hekafsd on
    2. on each server node start glusterd and hekafsd:
      • service glusterd start
      • service hekafsd start
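Since root can ssh between the nodes without a password (set up earlier), steps 1 and 2 can be driven from a single node. The sketch below only prints the commands it would run, using the node addresses listed above; remove the `echo` to actually execute them.

```shell
# Print the ssh commands that enable and start the HekaFS services on
# every brick node (dry run); drop the echo to run them for real.
print_node_setup() {
  for node in 192.168.122.21 192.168.122.22 192.168.122.23 192.168.122.24; do
    for svc in glusterfsd glusterd hekafsd; do
      echo "ssh root@${node} chkconfig ${svc} on"
    done
    echo "ssh root@${node} service glusterd start"
    echo "ssh root@${node} service hekafsd start"
  done
}
print_node_setup
```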
    3. open a browser window to the principal server node on port 8080 (http://192.168.122.21:8080).
    4. configure the nodes in your cluster
      • select Manage Servers
      • the IP address of the first or principal node is already listed
      • enter the IP address or node name and press Add
      • click Back to cluster configuration
      • repeat for each node in your cluster
      • press Done
    5. configure one or more volumes in your cluster
      • select Manage Volumes
      • As described above, each node in my cluster has two bricks, /dev/sda and /dev/sdb, mounted on /bricks/sda and /bricks/sdb
      • tick the checkbox for /bricks/sda on each node
      • leave Volume Type: set to Plain
      • leave Replica or Stripe count blank (unset)
      • enter testsda for Volume ID
      • press Provision
      • add_local(testsda) OK ... is displayed for all four nodes
      • click Back to volume configuration
      • testsda is now shown in the list of Existing Volumes
      • repeat as desired for additional volumes
      • press Done
    6. configure one or more tenants in your cluster
      • select Manage Tenants
      • enter bob as the Tenant Name
      • enter carol as the Tenant Password
      • enter 10000 as the Tenant UID Range: Low
      • enter 10999 as the Tenant UID Range: High
      • enter 10000 as the Tenant GID Range: Low
      • enter 10999 as the Tenant GID Range: High
      • press Add
      • add_local(bob) OK ... is displayed for all four nodes
      • click Back to tenant configuration
      • bob is now shown in the list of Existing Tenants
      • repeat as desired for additional tenants
      • click volumes in the entry for bob
      • testsda is shown in the Volume List
      • tick the Enabled checkbox for testsda
      • press Update
      • Volumes enabled for bob ... is displayed for all four nodes
      • click Back to tenant configuration
    7. start the volume(s)
      • press Done to return to the Configuration Main menu
      • select Manage Volumes
      • click start in the testsda entry in the list of Existing Volumes
      • start_local(testsda) returned 0 ... is displayed for all four nodes
  • mount the volume(s) on the client(s)
    • create the mountpoint if needed (sudo mkdir -p /mnt/testsda), then: sudo hfs_mount 192.168.122.21 testsda bob carol /mnt/testsda