How to Deploy Openstack on RHEL6 using Foreman

From FedoraProject

Latest revision as of 06:27, 26 April 2013

This Wiki provides the Open Source and Red Hat communities with a guide to deploy OpenStack infrastructures using the Puppet/Foreman system management solution.

We are describing how to deploy and provision the management system itself and how to use it to deploy OpenStack Controller and OpenStack Compute nodes.

Note: This information has been gathered from real OpenStack lab tests using the latest data available at the time of writing.


Introduction

Assumptions

  • Upstream OpenStack based on Folsom (2012.2) from EPEL6
  • The Operating System is Red Hat Enterprise Linux - RHEL6.4+. All machines (virtual or physical) have been provisioned with a base RHEL6 system and are up to date.
  • The system management is based on Foreman 1.1 from the Foreman Yum repo and Puppet 2.6.17 from the Extra Packages for Enterprise Linux 6 (EPEL6) repo.
  • Foreman provides full system provisioning; however, this is not covered here, at least for now.
  • Foreman Smart-proxy runs on the same host as Foreman. Please adjust accordingly if running on a separate host.
Conventions:
  • All the code examples, unless specified otherwise, are to be run as root
  • The URLs provided must be replaced with the corresponding host names of the targeted environment

Definitions

  • Host Group: Foreman definition grouping an environment, Puppet classes and variables together, to be inherited by hosts.
  • OpenStack Controller node: Server with all the OpenStack modules needed to manage OpenStack Compute nodes.
  • OpenStack Compute node: Server with the OpenStack Nova Compute and Nova Network modules, providing OpenStack Cloud instances.
  • RHEL Core: Base Operating System installed with standard RHEL packages and the specific configuration required by all systems (hosts).

Architecture

The idea is to have a Management system able to quickly deploy OpenStack Controller or OpenStack Compute nodes.

OpenStack Components

An OpenStack Controller server groups the following OpenStack modules:

  • OpenStack Keystone, the identity service
  • OpenStack Glance, the image repository
  • OpenStack Nova Scheduler
  • OpenStack Horizon, the dashboard
  • OpenStack Nova API
  • QPID, the AMQP messaging broker
  • MySQL backend
  • An OpenStack Compute node

An OpenStack Compute node consists of the following modules:

  • OpenStack Nova Compute
  • OpenStack Nova Network
  • OpenStack Nova API
  • Libvirt and dependent packages

Environment

The following environment has been tested to validate all the procedures described in this document:

  • Management System: either a physical or a virtual machine
  • OpenStack controller: physical machine
  • OpenStack compute nodes: several physical machines
Note: Each physical machine has two NICs, respectively for the public and private networks. That is not required for the Management host.

Important:
  • In a production environment we recommend a High Availability solution for the OpenStack Controllers
  • OpenStack modules could be used on virtual machines but we have not tested it yet.
  • One NIC per physical machine with simulated interfaces (VLANs or aliases) should work but has not been tested.

Workflow

The goal is to achieve the OpenStack deployment in four steps:

  1. Deploy the system management solution Foreman
  2. Prepare Foreman for OpenStack
  3. Deploy the RHEL core definition with Puppet agent on participating OpenStack nodes
  4. Manage each OpenStack node to be either a Controller or a Compute node

RHEL Core: Common definitions

The Management server itself is based upon the RHEL Core so we define it first.

In the rest of this documentation we assume that every system:

  • Is using the latest Red Hat Enterprise Linux version 6.x. We have tested with RHEL6.4.
  • Is registered and subscribed with a Red Hat account, either RHN Classic or RHSM. We have tested with RHSM.
  • Has been updated with the latest packages
  • Has been configured with the following definitions
Tip: IPv6 is not required; however, for kernel dependency and performance reasons we recommend not deactivating the IPv6 module unless you know what you're doing.

NTP

The NTP service is required and included during the deployment of OpenStack components.

Moreover, for Puppet to work properly with SSL, all the physical machines must have their clocks in sync.

Make sure all the hardware clocks are:

  • Using the same time zone
  • On time, with less than 5 minutes of delay from each other
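The 5-minute tolerance above can be sanity-checked by comparing epoch timestamps (`date +%s`) taken on two hosts. The sketch below illustrates the rule with made-up timestamps; on real machines you would feed it the output of `date +%s` from each host.

```shell
# Flag clock skew beyond a 300-second (5-minute) tolerance between
# two epoch timestamps, e.g. `date +%s` taken on two hosts.
max_skew=300
check_skew() {
  local a=$1 b=$2
  # absolute difference between the two timestamps
  local d=$(( a > b ? a - b : b - a ))
  if [ "$d" -le "$max_skew" ]; then
    echo "in sync (${d}s apart)"
  else
    echo "out of sync (${d}s apart)"
  fi
}

check_skew 1366955000 1366955120   # 120s apart
check_skew 1366955000 1366956000   # 1000s apart
```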

Yum Repositories

Activate the following repositories:

  • RHEL6 Server Optional RPMS
  • PuppetLabs
  • EPEL6
rpm -Uvh http://yum.puppetlabs.com/el/6/products/i386/puppetlabs-release-6-7.noarch.rpm
rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum-config-manager --enable rhel-6-server-optional-rpms --enable epel --enable puppetlabs-products --enable puppetlabs-deps
yum clean expire-cache
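After enabling, each repository's file under /etc/yum.repos.d/ should carry enabled=1. As a point of reference, here is roughly what the EPEL entry looks like (an excerpt; exact fields vary with the release RPM):

```ini
; /etc/yum.repos.d/epel.repo (excerpt)
[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
enabled=1
gpgcheck=1
```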

We need the Augeas utility for manipulating configuration files:

yum -y install augeas

SELinux

At the time of writing, SELinux rules have not been fully validated for:

  • Foreman using the automated installation
  • OpenStack

This is ongoing work.

Note: If you plan to follow the manual installation of the management server (linked further down), you can skip this step.

So in the meantime, we need to activate SELinux in permissive mode:

setenforce 0

And make it persistent in the /etc/selinux/config file:

SELINUX=permissive
SELINUXTYPE=targeted
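The persistent change can also be made with a one-line sed. The sketch below runs against a temporary copy so it is safe to dry-run; on a real host you would point it at /etc/selinux/config instead.

```shell
# Switch the persistent SELinux mode to permissive.
# Demonstrated on a temp copy of a minimal config file.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"

# Only the SELINUX= line is rewritten; SELINUXTYPE= is left untouched.
sed -i 's/^SELINUX=.*/SELINUX=permissive/' "$cfg"
cat "$cfg"
```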

FQDN

Make sure every host can resolve the Fully Qualified Domain Name of the management server, either through the available DNS or alternatively via the /etc/hosts file.
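A quick way to verify this on each host is `getent hosts`, which consults the same resolver order (DNS and /etc/hosts) that the system uses. The sketch below is demonstrated with `localhost`; substitute your management server's FQDN, e.g. host1.example.org.

```shell
# Check that a name resolves via the system resolver (DNS or /etc/hosts).
check_fqdn() {
  if getent hosts "$1" > /dev/null; then
    echo "OK: $1 resolves"
  else
    echo "FAIL: $1 does not resolve (add it to DNS or /etc/hosts)"
  fi
}

check_fqdn localhost
```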

Puppet Agent

The puppet agent must be installed on every host and be configured in order to:

  • Point to the Puppet Master which is our Management server
  • Have Puppet plug-ins activated

The following commands make that happen:

PUPPETMASTER="puppet.example.org"
yum install -y puppet

# Set PuppetServer
augtool -s set /files/etc/puppet/puppet.conf/agent/server $PUPPETMASTER

# Puppet Plugins
augtool -s set /files/etc/puppet/puppet.conf/main/pluginsync true

Afterwards, the /etc/puppet/puppet.conf file should look like this:

[main]
# The Puppet log directory.
# The default value is '$vardir/log'.
logdir = /var/log/puppet

# Where Puppet PID files are kept.
# The default value is '$vardir/run'.
rundir = /var/run/puppet

# Where SSL certificates are kept.
# The default value is '$confdir/ssl'.
ssldir = $vardir/ssl

pluginsync=true

[agent]
# The file in which puppetd stores a list of the classes
# associated with the retrieved configuratiion. Can be loaded in
# the separate ``puppet`` executable using the ``--loadclasses``
# option.
# The default value is '$confdir/classes.txt'.
classfile = $vardir/classes.txt

# Where puppetd caches the local configuration. An
# extension indicating the cache format is added automatically.
# The default value is '$confdir/localconfig'.
localconfig = $vardir/localconfig

server=puppet.example.org

Automated Installation of Management Server

Let's get started with the automated deployment of Puppet-Foreman application suite in order to manage our OpenStack infrastructure.

Note: The manual installation method is described here: How_to_Deploy_Puppet_Foreman_on_RHEL6_manually

The Automated installation of the Management server provides:

  • Puppet Master
  • HTTPS service with Apache SSL and Passenger
  • Foreman Proxy (Smart-proxy) and Foreman
  • No SELinux

Before starting, make sure the "RHEL Core: Common definitions" described earlier have been applied.

In case there are several network interfaces on the Management machine, activate the Foreman HTTPS service on the desired interface by either:

  • Deactivating all interfaces but the desired one. Be careful not to cut yourself off!

or

  • After the installation, replacing the "<VirtualHost IP:80>" and "<VirtualHost IP:443>" records in the /etc/httpd/conf.d/foreman.conf file with the IP of your choice
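The second option can be scripted. The sketch below is illustrative only; the sed patterns assume the stock foreman.conf VirtualHost records, so back up the file before rewriting it:

```shell
# set_vhost_ip FILE IP
# Rewrite the <VirtualHost ...:80> and <VirtualHost ...:443> records in FILE
# so that Apache binds only to IP.
set_vhost_ip() {
    sed -i \
        -e "s|<VirtualHost [^:]*:80>|<VirtualHost $2:80>|" \
        -e "s|<VirtualHost [^:]*:443>|<VirtualHost $2:443>|" "$1"
}
```

Usage: `set_vhost_ip /etc/httpd/conf.d/foreman.conf 192.168.1.10`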

Use Puppet itself on the Management machine to install the Foreman suite:

# Get packages
yum install -y git 

# Get foreman-installer modules
git clone --recursive https://github.com/theforeman/foreman-installer.git /root/foreman-installer

# Install
puppet apply -v --modulepath=/root/foreman-installer -e "include puppet, puppet::server, passenger, foreman_proxy, foreman"

At this stage, Foreman should be accessible over HTTPS on your Management host: https://host1.example.org. You will be prompted to sign in: use the default user “admin” with the password “changeme”.

Note: If you're using a firewall, you must open the HTTPS port (443) to access the GUI.

=== Optional Database Backend ===

Please refer to this page if you'd like to use MySQL or PostgreSQL as the backend for Puppet and Foreman:

How_to_Puppet_Foreman_Mysql_or_Postgresql_on_RHEL6

== Foreman Configuration ==

For OpenStack deployments, we need Foreman to:

  • Setup smart-proxy (Foreman-proxy)
  • Define global variables
  • Download Puppet Modules
  • Declare hostgroups

Those steps have been scripted using Foreman API.

{{admon/important|Important|The script must be run (for now) from the Management server itself, as root.}}

Download the script:

git clone https://github.com/gildub/foremanopenstack-setup /tmp/setup-foreman
cd /tmp/setup-foreman

Then edit the foreman-params.json file and adjust the following to your needs:

  • The host section with your Foreman hostname and user/password if you have changed them
  • The proxy name, which can be anything you'd like; the host is the same as the Foreman FQDN, with SSL enabled and port 8443
  • The globals, which are your OpenStack values; adjust all passwords, tokens, network ranges and NIC values to your needs

Note: Be careful with the JSON syntax - it doesn't like a colon in the wrong place!
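Since a single stray character breaks the file, it's worth validating it before launching the script. Here is a quick sketch using the system Python (named python on RHEL6, python3 on newer systems); the helper name is our own:

```shell
# validate_json FILE — exit non-zero if FILE is not well-formed JSON.
validate_json() {
    "${PYTHON:-python}" -c 'import json, sys; json.load(open(sys.argv[1]))' "$1"
}
```

Usage: `validate_json foreman-params.json && echo "JSON OK"`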

Let's launch the script to configure Foreman with our parameters:

./foreman-setup.rb -l logfile all 

Check the log file to confirm there weren't any errors.

{{admon/note|Note|The above process can also be done manually using the Foreman GUI, as described here: [[How_to_Deploy_Openstack_Setup_Foreman_on_RHEL6_manually]]}}

== Manage Hosts ==

To make a system part of our OpenStack infrastructure we have to:

  • Make sure the host follows the Common Core definitions – See RHEL Core: Common definitions section above
  • Have the host's certificate signed so it's registered with the Management server
  • Assign the host either the openstack-controller or openstack-compute Host Group

=== Register Host Certificates ===

==== Using Autosign ====

With the autosign option, hosts can be automatically registered and made visible in Foreman by adding their hostnames to the /etc/puppet/autosign.conf file.
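For example (the hostnames are illustrative; a leading * acts as a glob):

```
# /etc/puppet/autosign.conf — one hostname or glob per line
host2.example.org
host3.example.org
*.compute.example.org
```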

==== Signing Certificates ====

If you're not using the autosign option, then you will have to sign the host certificate using either:

  • Foreman GUI

Open the Smart Proxies window from the menu "More -> Configuration -> Smart Proxies", and select "Certificates" from the drop-down button of the smart-proxy you created:

[[Image:Foreman-proxies.png]]

From there you can manage all the host certificates and get them signed.

  • The Command Line Interface

Assuming the Puppet agent (puppetd) is running on the host, its certificate will already have been created on the Puppet Master and will be waiting to be signed. From the Puppet Master host, use the “puppetca” tool with the “list” command to see the waiting certificates, for example:

# puppetca list
"host3.example.org" (84:AE:80:D2:8C:F5:15:76:0A:1A:4C:19:A9:B6:C1:11)

To sign a certificate, use the “sign” command and provide the hostname, for example:

puppetca sign host3.example.org

=== Assign a Host Group ===

Display the hosts using the “Hosts” button at the top of the Foreman GUI screen.

Then select the corresponding “Edit Host” drop-down button on the right side of the targeted host.

Assign the right environment and attach the appropriate Host Group to that host in order to make it a Controller or a Compute node.

[[Image:Foreman-host-hostgroup.png]]

Save by hitting the “Submit” button.

== Deploy OpenStack Components ==

We are done!

The OpenStack components will be installed when the Puppet agent synchronises with the Management server. Effectively, the classes will be applied when the agent retrieves the catalog from the Master and runs it.

You can also manually trigger the agent to check in with the Puppet Master. To do so, first deactivate the agent service on the targeted node:

service puppet stop

And run it manually:

puppet agent --verbose --no-daemonize