From Fedora Project Wiki

Revision as of 05:18, 6 March 2013

Introduction

Purpose

The intent of this document is to provide the Open Source and Red Hat communities with a guide to deploying OpenStack infrastructures using the Puppet/Foreman system management solution.

We are describing how to deploy and provision the management system itself and how to use it to deploy OpenStack Controller and OpenStack Compute nodes.

Note: This information has been gathered from an OpenStack lab project using the latest data available at the time of writing.

Assumptions

  • Upstream OpenStack based on Folsom (2012.2) from EPEL6
  • The operating system is Red Hat Enterprise Linux 6.4 or later. All machines (virtual or physical) have been provisioned with a base RHEL6 system and are up to date.
  • The system management is based on Foreman 1.1 from the Foreman Yum repo and Puppet 2.6.17 from Extra Packages for Enterprise Linux 6 (EPEL6).
  • Foreman provides full system provisioning, but this is not covered here, at least for now.
  • Foreman Smart-proxy runs on the same host as Foreman. Please adjust accordingly if running on a separate host.

Conventions

All the code examples or system output shown in this documentation use the following highlight:

This is an output or system command example!

All the code examples, unless specified otherwise, are to be run as root.

The URLs provided must have the host replaced by the corresponding one for the targeted environment.

Definitions

Name                        Description
Host Group                  Foreman definition grouping environment, Puppet classes and variables together to be inherited by hosts.
OpenStack Controller node   Server with all OpenStack modules to manage OpenStack Compute nodes.
OpenStack Compute node      Server running the OpenStack Nova Compute and Nova Network modules, providing OpenStack Cloud instances.
RHEL Core                   Base operating system installed with standard RHEL packages and the specific configuration required by all systems (or hosts).

2 Architecture

2.1 OpenStack Components

The idea is to have a management system able to quickly deploy OpenStack Controller or OpenStack Compute nodes.

An OpenStack Controller server groups the following OpenStack modules:

  • OpenStack Keystone, the identity service
  • OpenStack Glance, the image repository
  • OpenStack Nova Scheduler
  • OpenStack Horizon, the dashboard
  • OpenStack Nova API
  • QPID, the AMQP messaging broker
  • MySQL backend
  • An OpenStack Compute node

An OpenStack Compute node consists of the following modules:

  • OpenStack Nova Compute
  • OpenStack Nova Network
  • OpenStack Nova API
  • Libvirt and dependent packages

2.2 Environment

The following environment has been tested to validate all the procedures described in this document:

  • Management system: either a physical or a virtual machine
  • OpenStack Controller: a physical machine
  • OpenStack Compute nodes: several physical machines
  • Each physical machine has two NICs, for the public and private networks respectively. This is not required for the Management host.
Please note:
  • In a production environment we recommend a High Availability solution for the OpenStack Controllers.
  • OpenStack modules could be used on virtual machines, but we have not tested this yet.
  • One NIC per physical machine with simulated interfaces (VLANs or aliases) should work but has not been tested.

2.3 High level work-flow

The idea is to achieve the OpenStack deployment in four steps:

  1. Deploy the system management solution Foreman
  2. Prepare Foreman for OpenStack
  3. Deploy the RHEL core definition with Puppet agent on participating OpenStack nodes
  4. Manage each OpenStack node to be either a Controller or a Compute node

3 RHEL Core: Common definitions

The Management server itself uses the RHEL Core, so we define it first.

In the rest of this documentation we assume that every system:

  • Is using the latest Red Hat Enterprise Linux 6.x release. We have tested with RHEL6.4.
  • Is registered and subscribed with a Red Hat account, either RHN Classic or RHSM. We have tested with RHSM.
  • Has been updated with the latest packages.
  • Has been configured as described in the following subsections.
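As a quick sanity check, the release assumption can be verified from a shell. A minimal sketch (is_rhel6 is a hypothetical helper, not part of the deployment):

```shell
# is_rhel6: prints "yes" if a release string looks like RHEL 6.x
# (hypothetical helper for a quick pre-flight check)
is_rhel6() {
  case "$1" in
    *"release 6."*) echo yes ;;
    *) echo no ;;
  esac
}
# On a real host, feed it the contents of /etc/redhat-release:
is_rhel6 "$(cat /etc/redhat-release 2>/dev/null)"
```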

3.1 IPv6

IPv6 is not required. However, we mention it here because, for kernel dependency and performance reasons, we recommend not deactivating the IPv6 module unless you know what you are doing.

3.2 Time

The NTP service is required and is included during the deployment of the OpenStack components.

However, for Puppet to work properly with SSL, all the physical machines must have their clocks in sync.

Make sure all the hardware clocks are:

  • Using the same time zone
  • On time, with less than five minutes of drift from each other
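The five-minute rule can be expressed as a small check, assuming each host can report seconds since the epoch (drift_ok is a hypothetical helper):

```shell
# drift_ok: prints "ok" if two epoch timestamps differ by less than 300
# seconds, "drift" otherwise (hypothetical helper for the 5-minute rule)
drift_ok() {
  d=$(( $1 - $2 ))
  [ "$d" -lt 0 ] && d=$(( -d ))
  if [ "$d" -lt 300 ]; then echo ok; else echo drift; fi
}
# Compare the local clock against a reference timestamp, e.g. obtained
# with "ssh otherhost date +%s" (not run here):
drift_ok "$(date +%s)" "$(date +%s)"
```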

3.3 Yum Repositories

Activate the following repositories:

  • RHEL6 Server Optional RPMS
  • EPEL6
rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum-config-manager --enable rhel-6-server-optional-rpms
yum clean all

We need the Augeas utility for manipulating configuration files:

yum -y install augeas

3.4 SELinux

SELinux is a requirement for our projects; however, at the time of writing, SELinux has not been fully validated for:

  • Foreman
  • OpenStack
Note: If you plan to do the manual installation of the management server (described further down), you can skip this step.

In the meantime activate SELinux in permissive mode:

setenforce 0

And make it persistent in /etc/selinux/config file:

SELINUX=permissive
SELINUXTYPE=targeted
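The same change can be scripted. A sketch using sed on a scratch copy, so it can be tried without touching the real /etc/selinux/config:

```shell
# Demonstrate the persistent change on a scratch copy of the config file
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
# Switch the mode to permissive; against the real file this would be:
#   sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
sed -i 's/^SELINUX=.*/SELINUX=permissive/' "$cfg"
grep '^SELINUX=' "$cfg"
```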

3.5 FQDN

Make sure every host can resolve the Fully Qualified Domain Name of the management server, either via DNS or, alternatively, via the /etc/hosts file.
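Resolution can be tested quickly from any host (check_fqdn is a hypothetical helper; substitute your management server's FQDN for the name you pass it):

```shell
# check_fqdn: prints "resolves" or "does not resolve" for a given name,
# using getent so both DNS and /etc/hosts are consulted
check_fqdn() {
  if getent hosts "$1" > /dev/null; then
    echo "resolves"
  else
    echo "does not resolve"
  fi
}
# Sanity check against a known name; on a real host you would run e.g.
#   check_fqdn puppet.example.org
check_fqdn localhost
```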

3.6 Puppet Agent

The puppet agent must be installed on every host and be configured in order to:

  • Point to the Puppet Master which is our Management server
  • Have Puppet plug-ins activated

The following commands make that happen:

PUPPETMASTER="puppet.example.org"
yum install -y puppet

# Set PuppetServer
augtool -s set /files/etc/puppet/puppet.conf/agent/server $PUPPETMASTER

# Puppet Plugins
augtool -s set /files/etc/puppet/puppet.conf/main/pluginsync true

Afterwards, the /etc/puppet/puppet.conf file should look like this:

[main]
# The Puppet log directory.
# The default value is '$vardir/log'.
logdir = /var/log/puppet

# Where Puppet PID files are kept.
# The default value is '$vardir/run'.
rundir = /var/run/puppet

# Where SSL certificates are kept.
# The default value is '$confdir/ssl'.
ssldir = $vardir/ssl

pluginsync=true

[agent]
# The file in which puppetd stores a list of the classes
# associated with the retrieved configuratiion. Can be loaded in
# the separate ``puppet`` executable using the ``--loadclasses``
# option.
# The default value is '$confdir/classes.txt'.
classfile = $vardir/classes.txt


# Where puppetd caches the local configuration. An
# extension indicating the cache format is added automatically.
# The default value is '$confdir/localconfig'.
localconfig = $vardir/localconfig

server=puppet.example.org

4 Management Server

Let's get started with how to deploy the Puppet/Foreman application in order to manage our OpenStack infrastructure.

Please note: Foreman uses SQLite by default. However, MySQL or PostgreSQL is recommended for production and/or large-scale environments.

To use the MySQL backend, follow the MySQL backend section of the manual installation procedure described below. PostgreSQL integration is not covered.

We describe two installation methods for the Management application:

  • Automated

or

  • Manual

We recommend using the automated approach.

However, the manual approach walks you through the installation of the automated components. This should be helpful for some OpenStack scenarios and also for troubleshooting. The manual installation doesn't cover the Apache/SSL/Passenger components yet.

4.1 Automated Installation

The Automated installation of the Management server provides:

  • Puppet Master
  • HTTPS service with Apache SSL and Passenger
  • Foreman Proxy (Smart-proxy) and Foreman
  • No SELinux

Before starting, make sure the Common Core definitions described earlier have been applied.

To get those services installed, configured and running, we basically use puppet itself with the following commands to be executed on the Management machine, host1.example.org for instance:

# Get some packages
yum install -y puppet git policycoreutils-python

# Get foreman-installer modules
git clone --recursive https://github.com/theforeman/foreman-installer.git /root/foreman-installer

# Install
puppet -v --modulepath=/root/foreman-installer -e "include puppet, puppet::server, passenger, foreman_proxy, foreman"
Note: policycoreutils-python will be needed in the future for SELinux.

Foreman should then be accessible at https://host1.example.org.

You will be prompted to sign in: use the default user "admin" with the password "changeme".

4.2 Manual Installation

The manual installation described here provides:

  • Puppet Master
  • HTTP service with Webrick
  • Foreman Proxy (Smart-proxy) and Foreman
  • SELinux

Before starting, make sure the Common Core definitions described earlier have been applied.

4.2.1 Puppet Master

Once the core components have been prepared, we can install the Puppet master and Git. Git will be used to get the Puppet modules specific to OpenStack:

yum install -y git puppet-server policycoreutils-python
4.2.1.1 Initial Puppet Master configuration

We need to customise the Puppet Master configuration file /etc/puppet/puppet.conf.

First we activate Puppet plugins (module custom types & facts):

augtool -s set /files/etc/puppet/puppet.conf/main/pluginsync true

Then we add a default Production environment to Puppet. You might want to extend this by adding other environments such as development, test, or staging.

mkdir -p /etc/puppet/modules/production
mkdir /etc/puppet/modules/common
augtool -s set /files/etc/puppet/puppet.conf/production/modulepath /etc/puppet/modules/production:/etc/puppet/modules/common

The Puppet autosign feature lets you filter which hosts' certificate requests will automatically be signed:

augtool -s set /files/etc/puppet/puppet.conf/master/autosign '$confdir/autosign.conf'
touch /etc/puppet/autosign.conf
chmod 664 /etc/puppet/autosign.conf
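The autosign.conf file simply lists certificate-name patterns, one per line. A sketch in a scratch directory (the *.example.org pattern is a placeholder for your own domain):

```shell
# Populate an autosign whitelist in a scratch directory; in the real setup
# the file would be /etc/puppet/autosign.conf with mode 664
confdir=$(mktemp -d)
echo "*.example.org" > "$confdir/autosign.conf"
chmod 664 "$confdir/autosign.conf"
cat "$confdir/autosign.conf"
```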
4.2.1.2 SELinux

In order to have SELinux enforced on the Management host, we need to:

  • Set the SELinux type for /etc/puppet:
semanage fcontext -a -t puppet_etc_t '/etc/puppet(/.*)?'
  • Make sure the configuration files' type gets applied when files are touched:
echo "/etc/puppet/*" >> /etc/selinux/restorecond.conf
  • Allow Puppet Master to use the Database:
setsebool -P puppetmaster_use_db true

4.2.2 Foreman Installation

Get Foreman packages from the yum repo:

yum install -y http://yum.theforeman.org/rc/el6/x86_64/foreman-release-1.1RC5-1.el6.noarch.rpm
yum install -y foreman foreman-proxy foreman-mysql foreman-mysql2 rubygem-redcarpet
4.2.2.1 External Node Classification

For the Puppet ENC we rely on the github.com/theforeman project and fetch the node.rb script from it:

git clone git://github.com/theforeman/puppet-foreman.git /tmp/puppet-foreman
cp /tmp/puppet-foreman/templates/external_node.rb.erb /etc/puppet/node.rb

We need to edit the variables defined at the head of the file, /etc/puppet/node.rb.

We do this using the "sed" command so it can be scripted later:

sed -i "s/<%= @foreman_url %>/http:\/\/$(hostname):3000/" /etc/puppet/node.rb
sed -i 's/<%= @puppet_home %>/\/var\/lib\/puppet/' /etc/puppet/node.rb
sed -i 's/<%= @facts %>/true/' /etc/puppet/node.rb
sed -i 's/<%= @storeconfigs %>/false/' /etc/puppet/node.rb
chmod 755 /etc/puppet/node.rb
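As a miniature illustration of what these substitutions do, here is the first one applied to a one-line scratch copy of the template (host1.example.org is the example host used throughout this document):

```shell
# Apply the @foreman_url substitution to a scratch copy of the template line
tmpl=$(mktemp)
echo ':url => "<%= @foreman_url %>",' > "$tmpl"
sed -i "s/<%= @foreman_url %>/http:\/\/host1.example.org:3000/" "$tmpl"
cat "$tmpl"
```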


The result should look like this (extract of the modified section):

SETTINGS = {
  :url => "http://host1.example.org:3000",
  :puppetdir => "/var/lib/puppet",
  :facts => true,
  :storeconfigs => false,
  :timeout => 3,

Finally we tell Puppet Master to use ENC:

augtool -s set /files/etc/puppet/puppet.conf/master/external_nodes /etc/puppet/node.rb
augtool -s set /files/etc/puppet/puppet.conf/master/node_terminus exec

4.2.3 Foreman Reports

We use the Foreman report script from the github.com/theforeman project downloaded earlier:

cp /tmp/puppet-foreman/templates/foreman-report.rb.erb /usr/lib/ruby/site_ruby/1.8/puppet/reports/foreman.rb
augtool -s set /files/etc/puppet/puppet.conf/master/reports foreman
4.2.3.1 Enable Foreman-proxy features
sed -i -r 's/(:puppetca:).*/\1 true/' /etc/foreman-proxy/settings.yml
sed -i -r 's/(:puppet:).*/\1 true/' /etc/foreman-proxy/settings.yml
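These edits can be verified on a scratch copy of settings.yml (a sketch; the real file ships with the foreman-proxy package):

```shell
# Demonstrate the two sed edits on a two-line scratch settings.yml
yml=$(mktemp)
printf ':puppetca: false\n:puppet: false\n' > "$yml"
sed -i -r 's/(:puppetca:).*/\1 true/' "$yml"
sed -i -r 's/(:puppet:).*/\1 true/' "$yml"
cat "$yml"
```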
4.2.3.2 Activate & run services
chkconfig foreman-proxy on
service foreman-proxy start
chkconfig foreman on
service foreman start

Foreman should be accessible at http://host1.example.org:3000.

Note: The default user is "admin" with the password "changeme".

4.2.4 Optional Mysql Backend

Let's get the DBMS and activate the service by default:

yum install -y mysql-server
chkconfig mysqld on
service mysqld start

Then we initialise the MySQL database:

MYSQL_ADMIN_PASSWD='mysql'
/usr/bin/mysqladmin -u root password "${MYSQL_ADMIN_PASSWD}"
/usr/bin/mysqladmin -u root -h $(hostname) password "${MYSQL_ADMIN_PASSWD}"
4.2.4.1 Puppet database

We need to create a Puppet database and grant permissions to its user, "puppet":

The following command will do that for us.

Note: Change the MYSQL_PUPPET_PASSWD variable to assign the password of your choice.
Note: The command will prompt for the MySQL root password (MYSQL_ADMIN_PASSWD) set up earlier.
MYSQL_PUPPET_PASSWD='puppet'
echo "create database puppet; GRANT ALL PRIVILEGES ON puppet.* TO puppet@localhost IDENTIFIED BY '$MYSQL_PUPPET_PASSWD'; commit;" | mysql -u root -p

Finally we adjust the /etc/puppet/puppet.conf file for MySQL.

Note: We reuse the MYSQL_PUPPET_PASSWD value assigned before.
augtool -s set /files/etc/puppet/puppet.conf/master/storeconfigs true
augtool -s set /files/etc/puppet/puppet.conf/master/dbadapter mysql
augtool -s set /files/etc/puppet/puppet.conf/master/dbname puppet
augtool -s set /files/etc/puppet/puppet.conf/master/dbuser puppet
augtool -s set /files/etc/puppet/puppet.conf/master/dbpassword $MYSQL_PUPPET_PASSWD
augtool -s set /files/etc/puppet/puppet.conf/master/dbserver localhost
augtool -s set /files/etc/puppet/puppet.conf/master/dbsocket /var/lib/mysql/mysql.sock
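After these commands, the [master] section of /etc/puppet/puppet.conf should contain entries along these lines (a sketch, assuming the default MYSQL_PUPPET_PASSWD of "puppet"):

```ini
[master]
storeconfigs = true
dbadapter = mysql
dbname = puppet
dbuser = puppet
dbpassword = puppet
dbserver = localhost
dbsocket = /var/lib/mysql/mysql.sock
```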
4.2.4.2 Foreman database

First off we need the mysql gems for foreman:

yum -y install foreman-mysql*


We need to configure Foreman to use our MySQL Puppet database.

Modify the /etc/foreman/database.yml file so the production section looks like this:

production:
  adapter: mysql2
  database: puppet
  username: puppet
  password: puppet
  host: localhost
  socket: "/var/lib/mysql/mysql.sock"


And then we tell Foreman to populate the database:

cd /usr/share/foreman && RAILS_ENV=production rake db:migrate
4.2.4.3 Mysql Optimisation - Optional

This should be done only once the puppet database has been created and populated.

Run the following create-index command; you'll be prompted for the MySQL root password (MYSQL_ADMIN_PASSWD) specified earlier:

echo "create index exported_restype_title on resources (exported, restype, title(50));" | mysql -u root -p -D puppet

4.3 Set-up Foreman

4.3.1 Smart-Proxy

Once the Foreman and Foreman-proxy services are up and running, we need to link them together.

First, let's log into the Foreman GUI: