OpenShift Origin-F19

From FedoraProject

Revision as of 18:56, 24 May 2013 by Tdawson (Talk | contribs)


Fedora 19 is the first release in which OpenShift Origin became a Fedora feature.

This page shows how to set up OpenShift Origin on Fedora 19 using the packages in Fedora, as opposed to the packages published upstream. The steps are written out to be done by hand. Yes, they can be scripted and/or puppetized, but they are spelled out here so that people can see them and fine-tune them.

Goal: By the end of this, you should have two machines: a broker machine and one node machine. You should be able to create applications, which will be placed on the node machine, check the status of those applications, and point your web browser at their URLs.

Note: There is no web console in Fedora 19. That will be in Fedora 20.

These instructions were compiled mostly from two upstream sources.


Initial Setup of Broker and Node Machines

ON BOTH BROKER AND NODE

# Start with a Fedora 19 minimal install
yum -y update
# avoid clock skew
yum -y install ntp
/bin/systemctl enable ntpd.service
/bin/systemctl start  ntpd.service

Setup and Configure Broker

export DOMAIN="example.com"
export BROKERIP="$(nm-tool | grep Address | grep -v HW | awk '{print $2}')"
export BROKERNAME="broker.example.com"

Broker: Bind DNS

yum -y install bind bind-utils

KEYFILE=/var/named/${DOMAIN}.key
cd /var/named/
dnssec-keygen -a HMAC-MD5 -b 512 -n USER -r /dev/urandom ${DOMAIN}
KEY="$(grep Key: K${DOMAIN}*.private | cut -d ' ' -f 2)"
cd -
rndc-confgen -a -r /dev/urandom
echo $KEY
restorecon -v /etc/rndc.* /etc/named.*
chown -v root:named /etc/rndc.key
chmod -v 640 /etc/rndc.key
echo "forwarders { 8.8.8.8; 8.8.4.4; } ;" >> /var/named/forwarders.conf
restorecon -v /var/named/forwarders.conf
chmod -v 755 /var/named/forwarders.conf
rm -rvf /var/named/dynamic
mkdir -vp /var/named/dynamic
cat <<EOF > /var/named/dynamic/${DOMAIN}.db
\$ORIGIN .
\$TTL 1	; 1 seconds (for testing only)
${DOMAIN} IN SOA ns1.${DOMAIN}. hostmaster.${DOMAIN}. (
                         2011112904 ; serial
                         60         ; refresh (1 minute)
                         15         ; retry (15 seconds)
                         1800       ; expire (30 minutes)
                         10         ; minimum (10 seconds)
                          )
                     NS ns1.${DOMAIN}.
                     MX 10 mail.${DOMAIN}.
\$ORIGIN ${DOMAIN}.
ns1	              A        127.0.0.1

EOF
cat <<EOF > /var/named/${DOMAIN}.key
key ${DOMAIN} {
  algorithm HMAC-MD5;
  secret "${KEY}";
};
EOF
cat /var/named/dynamic/${DOMAIN}.db
cat /var/named/${DOMAIN}.key

chown -Rv named:named /var/named
restorecon -rv /var/named
mv /etc/named.conf /etc/named.conf.openshift
cat <<EOF > /etc/named.conf
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//

options {
    listen-on port 53 { any; };
    directory "/var/named";
    dump-file "/var/named/data/cache_dump.db";
    statistics-file "/var/named/data/named_stats.txt";
    memstatistics-file "/var/named/data/named_mem_stats.txt";
    allow-query { any; };
    recursion yes;

    /* Path to ISC DLV key */
    bindkeys-file "/etc/named.iscdlv.key";

    // set forwarding to the next nearest server (from DHCP response)
    forward only;
    include "forwarders.conf";
};

logging {
    channel default_debug {
        file "data/named.run";
        severity dynamic;
    };
};

// use the default rndc key
include "/etc/rndc.key";
 
controls {
    inet 127.0.0.1 port 953
    allow { 127.0.0.1; } keys { "rndc-key"; };
};

include "/etc/named.rfc1912.zones";

include "${DOMAIN}.key";

zone "${DOMAIN}" IN {
    type master;
    file "dynamic/${DOMAIN}.db";
    allow-update { key ${DOMAIN} ; } ;
};
EOF

cat /etc/named.conf
chown -v root:named /etc/named.conf
restorecon /etc/named.conf

firewall-cmd --add-service=dns
firewall-cmd --permanent --add-service=dns
firewall-cmd --list-all
/bin/systemctl enable named.service
/bin/systemctl start named.service
nsupdate -k ${KEYFILE}
> server 127.0.0.1
> update delete broker.example.com A
> update add **your broker full name** 180 A **your broker ip address**
  (example: update add broker.example.com 180 A 192.168.122.220)
> send
> quit
ping broker.example.com
dig @127.0.0.1 broker.example.com
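The interactive nsupdate session above can also be driven from a batch file, which is easier to script. A sketch using the example hostname and IP from above (the /tmp path is an arbitrary choice):

```shell
# Write the DNS update commands to a batch file instead of typing them
# interactively (broker.example.com / 192.168.122.220 are the example values).
cat > /tmp/broker-dns-update.txt <<'EOF'
server 127.0.0.1
update delete broker.example.com A
update add broker.example.com 180 A 192.168.122.220
send
EOF
# Then feed the batch file to nsupdate:
# nsupdate -k ${KEYFILE} /tmp/broker-dns-update.txt
cat /tmp/broker-dns-update.txt
```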

Configure the BROKER DHCP client and hostname

echo "prepend domain-name-servers **your broker ip address**;" >> /etc/dhcp/dhclient-eth0.conf
echo "supersede host-name \"broker\";" >> /etc/dhcp/dhclient-eth0.conf
echo "supersede domain-name \"example.com\";" >> /etc/dhcp/dhclient-eth0.conf

echo "broker.example.com" > /etc/hostname

Installing and configuring MongoDB

yum -y install mongodb-server

vi /etc/mongodb.conf

  1. Uncomment auth = true
  2. Add smallfiles = true
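The two edits above can also be made non-interactively. A minimal sketch, demonstrated on a scratch copy so it is safe to run as-is (the sample contents are assumed); point CONF at /etc/mongodb.conf to apply it for real:

```shell
# Scratch copy standing in for /etc/mongodb.conf (sample contents assumed).
CONF=$(mktemp)
printf '#auth = true\nnojournal = true\n' > "$CONF"
# 1. Uncomment auth = true
sed -i -e 's/^#auth = true/auth = true/' "$CONF"
# 2. Add smallfiles = true if it is not already present
grep -q '^smallfiles = true' "$CONF" || echo 'smallfiles = true' >> "$CONF"
cat "$CONF"
```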

/usr/bin/systemctl enable mongod.service
/usr/bin/systemctl status mongod.service
/usr/bin/systemctl start mongod.service
/usr/bin/systemctl status mongod.service

  1. Testing

mongo
> show dbs
> exit

Installing and configuring QPID

  1. Activemq on F19 isn't ready for production. When it is, we'll use that
  2. For now let's use QPID with mcollective.

yum -y install mcollective-qpid-plugin qpid-cpp-server
firewall-cmd --add-port=5672/tcp
firewall-cmd --permanent --add-port=5672/tcp
firewall-cmd --list-all

/usr/bin/systemctl enable qpidd.service
/usr/bin/systemctl start qpidd.service
/usr/bin/systemctl status qpidd.service

Installing and configuring MCollective client (QPID)

yum -y install mcollective-client
mv /etc/mcollective/client.cfg /etc/mcollective/client.cfg.orig

cat <<EOF > /etc/mcollective/client.cfg
topicprefix = /topic/
main_collective = mcollective
collectives = mcollective
libdir = /usr/libexec/mcollective
loglevel = debug
logfile = /var/log/mcollective-client.log

# Plugins
securityprovider = psk
plugin.psk = unset
connector = qpid
plugin.qpid.host=broker.example.com
plugin.qpid.secure=false
plugin.qpid.timeout=5

# Facts
factsource = yaml
plugin.yaml = /etc/mcollective/facts.yaml
EOF

Installing and configuring the broker application

  1. When mcollective was updated to 2.2.3 it created a conflict with one of our components.
  2. We are working on fixing the conflict, but until then, do the following.

yumdownloader openshift-origin-msg-common
rpm -Uvh openshift-origin-msg-common-1.4.1-1.fc19.noarch.rpm --nodeps --force

yum -y install openshift-origin-broker openshift-origin-broker-util rubygem-openshift-origin-auth-remote-user rubygem-openshift-origin-msg-broker-mcollective rubygem-openshift-origin-dns-bind

sed -i -e "s/ServerName .*$/ServerName broker.example.com/" /etc/httpd/conf.d/000002_openshift_origin_broker_servername.conf
cat /etc/httpd/conf.d/000002_openshift_origin_broker_servername.conf

/usr/bin/systemctl enable httpd.service
/usr/bin/systemctl enable ntpd.service
/usr/bin/systemctl enable sshd.service

firewall-cmd --add-service=ssh
firewall-cmd --add-service=http
firewall-cmd --add-service=https
firewall-cmd --permanent --add-service=ssh
firewall-cmd --permanent --add-service=http
firewall-cmd --permanent --add-service=https
firewall-cmd --list-all

openssl genrsa -out /etc/openshift/server_priv.pem 2048
openssl rsa -in /etc/openshift/server_priv.pem -pubout > /etc/openshift/server_pub.pem
ssh-keygen -t rsa -b 2048 -f ~/.ssh/rsync_id_rsa
cp -v ~/.ssh/rsync_id_rsa* /etc/openshift/

setsebool -P httpd_unified=on httpd_can_network_connect=on httpd_can_network_relay=on httpd_run_stickshift=on named_write_master_zones=on
fixfiles -R rubygem-passenger restore
fixfiles -R mod_passenger restore
restorecon -rv /var/run
restorecon -rv /usr/share/gems/gems/passenger-*

vi /etc/openshift/broker.conf

  1. You might not need to change anything; just confirm the following values

CLOUD_DOMAIN="example.com"
VALID_GEAR_SIZES="small,medium"

Configuring the broker plugins and MongoDB user accounts

cp /usr/share/gems/gems/openshift-origin-auth-remote-user-*/conf/openshift-origin-auth-remote-user.conf.example /etc/openshift/plugins.d/openshift-origin-auth-remote-user.conf
cp /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective.conf.example /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective.conf
cd /var/named/
KEY="$(grep Key: K${DOMAIN}*.private | cut -d ' ' -f 2)"
cat $KEYFILE
echo $KEY
cat <<EOF > /etc/openshift/plugins.d/openshift-origin-dns-bind.conf
BIND_SERVER="127.0.0.1"
BIND_PORT=53
BIND_KEYNAME="${DOMAIN}"
BIND_KEYVALUE="${KEY}"
BIND_ZONE="${DOMAIN}"
EOF
pushd /usr/share/selinux/packages/rubygem-openshift-origin-dns-bind/ && make -f /usr/share/selinux/devel/Makefile ; popd
semodule -i /usr/share/selinux/packages/rubygem-openshift-origin-dns-bind/dhcpnamedforward.pp

cp -v /var/www/openshift/broker/httpd/conf.d/openshift-origin-auth-remote-user-basic.conf.sample /var/www/openshift/broker/httpd/conf.d/openshift-origin-auth-remote-user.conf
htpasswd -c -b -s /etc/openshift/htpasswd demo demopassword

  1. Don't forget your password (demopassword in this example)

cat /etc/openshift/htpasswd

grep MONGO /etc/openshift/broker.conf
mongo openshift_broker_dev --eval 'db.addUser("openshift", "mooo")'

  1. If you are going to change the username and/or password, change broker.conf

yum -y install rubygem-psych
cd /var/www/openshift/broker

  1. This is being fixed, but for now do the following

vi Gemfile

  1. Remove the version constraint from the minitest line
  2. Add gem 'psych'
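The Gemfile edits can also be scripted. A sketch on a scratch copy (the exact minitest line in the real Gemfile may differ); run the same sed and grep against /var/www/openshift/broker/Gemfile to apply:

```shell
# Scratch copy standing in for the broker Gemfile (sample lines assumed).
GEMFILE=$(mktemp)
printf "gem 'minitest', '~> 4.0'\ngem 'rails'\n" > "$GEMFILE"
# 1. Drop the version constraint from the minitest line
sed -i -e "s/^gem 'minitest'.*/gem 'minitest'/" "$GEMFILE"
# 2. Add psych if it is not already listed
grep -q "gem 'psych'" "$GEMFILE" || echo "gem 'psych'" >> "$GEMFILE"
cat "$GEMFILE"
```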

gem install mongoid
bundle --local

/usr/bin/systemctl enable openshift-broker.service
/usr/bin/systemctl start httpd.service
/usr/bin/systemctl start openshift-broker.service
/usr/bin/systemctl status openshift-broker.service

curl -k -u demo:demopassword https://localhost/broker/rest/api

Setup and Configure Node

Initial setup/configure of the node host

  1. ON NODE

yum -y update
yum -y install ntp
/bin/systemctl enable ntpd.service
/bin/systemctl start ntpd.service

  1. Find out the node ip address

nm-tool

  1. ON BROKER

domain=example.com
keyfile=/var/named/${domain}.key

  1. Use the IP address from the node, found above

oo-register-dns -h node -d ${domain} -n 192.168.122.161 -k ${keyfile}

scp /etc/openshift/rsync_id_rsa.pub root@node.example.com:/root/.ssh/

  1. ON NODE

cat /root/.ssh/rsync_id_rsa.pub >> /root/.ssh/authorized_keys
rm -f /root/.ssh/rsync_id_rsa.pub

  1. ON BROKER

ssh -i /root/.ssh/rsync_id_rsa root@node.example.com
exit

  1. Find out the broker ip address

nm-tool

Configure the NODE DHCP client and hostname

  1. ON NODE

echo "prepend domain-name-servers **your broker ip address**;" >> /etc/dhcp/dhclient-eth0.conf
echo "supersede host-name \"node\";" >> /etc/dhcp/dhclient-eth0.conf
echo "supersede domain-name \"example.com\";" >> /etc/dhcp/dhclient-eth0.conf
echo "node.example.com" > /etc/hostname

reboot

Setting up MCollective on the node host

  1. ON NODE

yum -y install openshift-origin-msg-node-mcollective
mv /etc/mcollective/server.cfg /etc/mcollective/server.cfg.orig

cat <<EOF > /etc/mcollective/server.cfg
topicprefix = /topic/
main_collective = mcollective
collectives = mcollective
libdir = /usr/libexec/mcollective
logfile = /var/log/mcollective.log
loglevel = debug
daemonize = 1
direct_addressing = n

# Plugins
securityprovider = psk
plugin.psk = unset
connector = qpid
plugin.qpid.host=broker.example.com
plugin.qpid.secure=false
plugin.qpid.timeout=5

# Facts
factsource = yaml
plugin.yaml = /etc/mcollective/facts.yaml
EOF

/bin/systemctl enable mcollective.service
/bin/systemctl start mcollective.service

  1. ON BROKER

mco ping

Setting up node packages on the node host

  1. ON NODE

yum -y install rubygem-openshift-origin-node rubygem-passenger-native openshift-origin-port-proxy openshift-origin-node-util
yum -y install openshift-origin-cartridge-cron-1.4 openshift-origin-cartridge-diy-0.1

firewall-cmd --add-service=ssh
firewall-cmd --add-service=http
firewall-cmd --add-service=https
firewall-cmd --permanent --add-service=ssh
firewall-cmd --permanent --add-service=http
firewall-cmd --permanent --add-service=https
firewall-cmd --list-all

Configuring PAM namespace module, cgroups, and user quotas on the node host

  1. ON NODE
  2. PAM

sed -i -e 's|pam_selinux|pam_openshift|g' /etc/pam.d/sshd

for f in "runuser" "runuser-l" "sshd" "su" "system-auth-ac"; do

 t="/etc/pam.d/$f"
 if ! grep -q "pam_namespace.so" "$t"
 then
   echo -e "session\t\trequired\tpam_namespace.so no_unmount_on_close" >> "$t"
 fi

done

  1. CGROUPS

echo "mount {" >> /etc/cgconfig.conf
echo " cpu = /cgroup/all;" >> /etc/cgconfig.conf
echo " cpuacct = /cgroup/all;" >> /etc/cgconfig.conf
echo " memory = /cgroup/all;" >> /etc/cgconfig.conf
echo " freezer = /cgroup/all;" >> /etc/cgconfig.conf
echo " net_cls = /cgroup/all;" >> /etc/cgconfig.conf
echo "}" >> /etc/cgconfig.conf
restorecon -v /etc/cgconfig.conf
mkdir /cgroup
restorecon -RFvv /cgroup

/bin/systemctl enable cgconfig.service
/bin/systemctl enable cgred.service
/usr/sbin/chkconfig openshift-cgroups on
/bin/systemctl restart cgconfig.service
/bin/systemctl restart cgred.service
/usr/sbin/service openshift-cgroups restart

  1. DISK QUOTA
  2. Edit /etc/fstab and add usrquota to whichever filesystem
     holds /var/lib/openshift

UUID=b9e21eae-4b8c-4936-9f5d-d10631ff535e / ext4 defaults,usrquota 1 1
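Adding usrquota can also be done with awk rather than by hand-editing. A sketch on a scratch copy (the UUID is the example above); back up /etc/fstab before applying the same command to it:

```shell
# Scratch copy standing in for /etc/fstab (example entry from above).
FSTAB=$(mktemp)
echo 'UUID=b9e21eae-4b8c-4936-9f5d-d10631ff535e / ext4 defaults 1 1' > "$FSTAB"
# Append usrquota to the options of the "/" entry unless already present.
awk '$2 == "/" && $4 !~ /usrquota/ { $4 = $4 ",usrquota" } { print }' \
    "$FSTAB" > "${FSTAB}.new" && mv "${FSTAB}.new" "$FSTAB"
cat "$FSTAB"
```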

  1. reboot or remount

mount -o remount /
quotacheck -cmug /

Configuring SELinux and System Control on the node host

  1. ON NODE
  2. SELINUX

setsebool -P httpd_unified=on httpd_can_network_connect=on httpd_can_network_relay=on httpd_read_user_content=on httpd_enable_homedirs=on httpd_run_stickshift=on allow_polyinstantiation=on

restorecon -rv /var/run
restorecon -rv /usr/sbin/mcollectived /var/log/mcollective.log /var/run/mcollectived.pid
restorecon -rv /var/lib/openshift /etc/openshift/node.conf /etc/httpd/conf.d/openshift

  1. SYSTEM CONTROL SETTINGS

echo "# Added for OpenShift" >> /etc/sysctl.d/openshift.conf
echo "kernel.sem = 250 32000 32 4096" >> /etc/sysctl.d/openshift.conf
echo "net.ipv4.ip_local_port_range = 15000 35530" >> /etc/sysctl.d/openshift.conf
echo "net.netfilter.nf_conntrack_max = 1048576" >> /etc/sysctl.d/openshift.conf
sysctl -p /etc/sysctl.d/openshift.conf

Configuring SSH, Port Proxy, and Node on the node host

  1. ON NODE
  2. SSH

vi /etc/ssh/sshd_config
> AcceptEnv GIT_SSH
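The AcceptEnv addition can be made idempotently from a script instead of vi. A sketch on a scratch copy; point SSHD_CONFIG at /etc/ssh/sshd_config to apply:

```shell
# Scratch copy standing in for /etc/ssh/sshd_config (sample line assumed).
SSHD_CONFIG=$(mktemp)
echo 'AcceptEnv LANG LC_ALL' > "$SSHD_CONFIG"
# Append the GIT_SSH line only if it is not already accepted.
grep -q 'AcceptEnv.*GIT_SSH' "$SSHD_CONFIG" || \
    echo 'AcceptEnv GIT_SSH' >> "$SSHD_CONFIG"
cat "$SSHD_CONFIG"
```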

perl -p -i -e "s/^#MaxSessions .*$/MaxSessions 40/" /etc/ssh/sshd_config
perl -p -i -e "s/^#MaxStartups .*$/MaxStartups 40/" /etc/ssh/sshd_config

/bin/systemctl restart sshd.service

  1. PORT PROXY

firewall-cmd --add-port=35531-65535/tcp
firewall-cmd --permanent --add-port=35531-65535/tcp
firewall-cmd --list-all

/bin/systemctl enable openshift-port-proxy.service
/bin/systemctl restart openshift-port-proxy.service

  1. NODE SETUP

/bin/systemctl enable openshift-gears.service

  1. Find node and broker IP address

nm-tool

vi /etc/openshift/node.conf
> PUBLIC_HOSTNAME="node.example.com"
> PUBLIC_IP="192.168.122.161"     (Node IP address)
> BROKER_HOST="192.168.122.220"   (Broker IP address)
> CLOUD_DOMAIN="example.com"
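The node.conf values can also be set with sed instead of vi. A sketch on a scratch copy (the placeholder defaults are assumed; the IPs are the example values from above); run the same sed against /etc/openshift/node.conf to apply:

```shell
# Scratch copy standing in for /etc/openshift/node.conf (defaults assumed).
NODECONF=$(mktemp)
printf '%s\n' \
    'PUBLIC_HOSTNAME="please.change.me"' \
    'PUBLIC_IP="127.0.0.1"' \
    'BROKER_HOST="localhost"' \
    'CLOUD_DOMAIN="example.com"' > "$NODECONF"
sed -i \
    -e 's/^PUBLIC_HOSTNAME=.*/PUBLIC_HOSTNAME="node.example.com"/' \
    -e 's/^PUBLIC_IP=.*/PUBLIC_IP="192.168.122.161"/' \
    -e 's/^BROKER_HOST=.*/BROKER_HOST="192.168.122.220"/' \
    "$NODECONF"
cat "$NODECONF"
```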

/etc/cron.minutely/openshift-facts

Reboot Node and test

  1. ON NODE

reboot

  1. ON BROKER (after node is back up)

mco ping
curl -k -u demo:demopassword https://localhost/broker/rest/api

yum -y install rubygem-rhc
LIBRA_SERVER=broker.example.com rhc setup