{{admon/important|OpenStack in EPEL|The OpenStack Folsom packages were retired from EPEL 6. Please visit the [http://openstack.redhat.com/Quickstart RDO project] for running OpenStack on EL platforms.}}
This Wiki provides the Open Source and Red Hat communities with a guide to deploying OpenStack infrastructures using the Puppet/Foreman system management solution.
We describe how to deploy and provision the management system itself and how to use it to deploy OpenStack Controller and OpenStack Compute nodes.
{{admon/note|Note| This information has been gathered from real OpenStack lab tests using the latest data available at the time of writing.}}
 
 
= Introduction =
 
== Assumptions ==
 
* Upstream OpenStack based on Folsom (2012.2) from EPEL6
* The operating system is Red Hat Enterprise Linux 6.4 or later (RHEL6.4+). All machines (virtual or physical) have been provisioned with a base RHEL6 system and are up to date.
* The system management is based on Foreman 1.1 from the Foreman Yum repository and Puppet 2.6.17 from Extra Packages for Enterprise Linux 6 (EPEL6).
* Foreman provides full system provisioning; however, that is not covered here, at least for now.
* Foreman Smart-Proxy runs on the same host as Foreman. Please adjust accordingly if running it on a separate host.
 
{{admon/note|Conventions|
* All code examples, unless specified otherwise, are to be run as root
* The URLs provided must be replaced with the corresponding host names of the targeted environment
}}
 
== Definitions ==
 
{|
! Name !! Description
|-
| Host Group || Foreman definition grouping environment, Puppet classes and variables together, to be inherited by hosts
|-
| OpenStack Controller node || Server with all OpenStack modules to manage OpenStack Compute nodes
|-
| OpenStack Compute node || Server running the OpenStack Nova Compute and Nova Network modules, providing OpenStack cloud instances
|-
| RHEL Core || Base operating system installed with standard RHEL packages and the specific configuration required by all systems (or hosts)
|}
 
= Architecture =
 
The idea is to have a management system able to quickly deploy OpenStack Controller or OpenStack Compute nodes.
 
== OpenStack Components ==
 
An OpenStack Controller server groups the following OpenStack modules:
 
* OpenStack Keystone, the identity service
* OpenStack Glance, the image repository
* OpenStack Nova Scheduler
* OpenStack Horizon, the dashboard
* OpenStack Nova API
* QPID, the AMQP messaging broker
* MySQL backend
* An OpenStack Compute node
 
An OpenStack Compute node consists of the following modules:
 
* OpenStack Nova Compute
* OpenStack Nova Network
* OpenStack Nova API
* Libvirt and dependent packages
 
== Environment ==
 
The following environment has been tested to validate all the procedures described in this document:
 
* Management system: either a physical or a virtual machine
* OpenStack controller: physical machine
* OpenStack compute nodes: several physical machines
 
{{admon/note|Note|Each physical machine has two NICs, respectively for the public and private networks. That is not required for the Management host.}}
 
{{admon/important|This is important|
* In a production environment we recommend a High Availability solution for the OpenStack Controllers
* OpenStack modules could run on virtual machines, but we have not tested that yet.
* One NIC per physical machine with simulated interfaces (VLANs or alias) should work but has not been tested.}}
 
== Workflow ==
 
The goal is to achieve the OpenStack deployment in four steps:
# Deploy the system management solution Foreman
# Prepare Foreman for OpenStack
# Deploy the RHEL core definition with Puppet agent on participating OpenStack nodes
# Manage each OpenStack node to be either a Controller or a Compute node
 
= RHEL Core: Common definitions =
 
The Management server itself is based upon the RHEL Core so we define it first.
 
In the rest of this documentation we assume that every system:
 
* Uses the latest Red Hat Enterprise Linux 6.x release. We have tested with RHEL6.4.
* Is registered and subscribed with a Red Hat account, either RHN Classic or RHSM. We have tested with RHSM.
* Has been updated with the latest packages
* Has been configured with the following definitions
 
{{admon/tip|This is a tip|
IPv6 is not required. However, for kernel dependency and performance reasons, we recommend not deactivating the IPv6 module unless you know what you are doing.}}
 
== NTP ==
 
The NTP service is required and is included during the deployment of the OpenStack components.
 
However, for Puppet to work properly with SSL, all the physical machines must already have their clocks in sync.
 
Make sure all the hardware clocks are:
 
* Using the same time zone
* On time, with less than five minutes of drift from each other
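The clock requirements above can be met with the stock RHEL6 NTP packages. A minimal sketch (the pool server is the RHEL default; adapt it to your environment):

<pre>
# Install the NTP daemon
yum install -y ntp

# One-off sync so any large offset is corrected before the daemon starts
ntpdate 0.rhel.pool.ntp.org

# Start ntpd and make it persistent across reboots
service ntpd start
chkconfig ntpd on

# Verify the peers are reachable
ntpq -p
</pre>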
 
== Yum Repositories ==
 
Activate the following repositories:
 
* RHEL6 Server Optional RPMS
* PuppetLabs
* EPEL6
 
<pre>
rpm -Uvh http://yum.puppetlabs.com/el/6/products/i386/puppetlabs-release-6-7.noarch.rpm
rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum-config-manager --enable rhel-6-server-optional-rpms --enable epel --enable puppetlabs-products --enable puppetlabs-deps
yum clean all
</pre>
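Before continuing, you can quickly confirm that the repositories are active:

<pre>
yum repolist enabled | grep -E 'epel|puppetlabs|optional'
</pre>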
 
We need the Augeas utility for manipulating configuration files:
<pre>
yum -y install augeas
</pre>
 
== SELinux==
 
At the time of writing, SELinux rules have not been fully validated for:
* Foreman using the automated installation
* OpenStack
 
This is an ongoing work.
 
{{admon/note|Note| If you plan to do the manual installation of the management server (further down), you can skip this.}}
 
So in the meantime, we need to set SELinux to permissive mode:
<pre>
setenforce 0
</pre>
 
And make it persistent in /etc/selinux/config file:
<pre>
SELINUX=permissive
SELINUXTYPE=targeted
</pre>
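Rather than editing the file by hand, the persistent change can be scripted; a small sketch, assuming the stock SELINUX= line is present in the file:

<pre>
# Make the change persistent in /etc/selinux/config
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config

# Confirm the current runtime mode
getenforce
</pre>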
 
== FQDN ==
 
Make sure every host can resolve the Fully Qualified Domain Name (FQDN) of the management server, either through the available DNS or, alternatively, via the /etc/hosts file.
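For example, to check the resolution and, if DNS is not available, fall back to a static entry (the IP address and host name below are placeholders for your environment):

<pre>
# Check that the management server's FQDN resolves
getent hosts puppet.example.org

# Otherwise, add a static entry
echo "192.168.0.10  puppet.example.org  puppet" >> /etc/hosts
</pre>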
 
== Puppet Agent ==
 
The puppet agent must be installed on every host and be configured in order to:
 
* Point to the Puppet Master which is our Management server
* Have Puppet plug-ins activated
 
The following commands make that happen:
 
<pre>
PUPPETMASTER="puppet.example.org"
yum install -y puppet
 
# Set PuppetServer
augtool -s set /files/etc/puppet/puppet.conf/agent/server $PUPPETMASTER
 
# Puppet Plugins
augtool -s set /files/etc/puppet/puppet.conf/main/pluginsync true
</pre>
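You can confirm the settings were written by reading them back with augtool:

<pre>
augtool print /files/etc/puppet/puppet.conf/agent/server
augtool print /files/etc/puppet/puppet.conf/main/pluginsync
</pre>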
 
Afterwards, the /etc/puppet/puppet.conf file should look like this:
 
<pre>
[main]
# The Puppet log directory.
# The default value is '$vardir/log'.
logdir = /var/log/puppet
 
# Where Puppet PID files are kept.
# The default value is '$vardir/run'.
rundir = /var/run/puppet
 
# Where SSL certificates are kept.
# The default value is '$confdir/ssl'.
ssldir = $vardir/ssl
 
pluginsync=true
 
[agent]
# The file in which puppetd stores a list of the classes
# associated with the retrieved configuration. Can be loaded in
# the separate ``puppet`` executable using the ``--loadclasses``
# option.
# The default value is '$confdir/classes.txt'.
classfile = $vardir/classes.txt
 
# Where puppetd caches the local configuration. An
# extension indicating the cache format is added automatically.
# The default value is '$confdir/localconfig'.
localconfig = $vardir/localconfig
 
server=puppet.example.org
</pre>
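At this point the agent can perform its first run, which submits its certificate request to the Puppet Master (certificate signing is covered further down):

<pre>
# One-shot run; wait up to 60 seconds for the certificate to be signed
puppet agent --test --waitforcert 60
</pre>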
 
= Automated Installation of Management Server =
 
Let's get started with the automated deployment of Puppet-Foreman application suite in order to manage our OpenStack infrastructure.
 
{{admon/note|Note|The manual installation method is described here:
[[How_to_Deploy_Puppet_Foreman_on_RHEL6_manually]]}}
 
The Automated installation of the Management server provides:
* Puppet Master
* HTTPS service with Apache SSL and Passenger
* Foreman Proxy (Smart-proxy) and Foreman
* No SELinux
 
Before starting, make sure the "RHEL Core: Common definitions" described earlier have been applied.
 
{{admon/important|Multiple network interfaces|If you have several interfaces activated, you can either:
* Deactivate all interfaces except the one you would like the Foreman HTTPS server to run on.
Or
* After the installation, replace the IP address in the "<VirtualHost IP:80>" and "<VirtualHost IP:443>" records of the /etc/httpd/conf.d/foreman.conf file in order to use your IP address of choice.
 
Otherwise you might end up with Foreman attached to the wrong interface and have to re-install the SSL certificates.
 
Note: There is probably a Puppet parameter for this, but we have not searched for it yet.}}
 
To get the management suite installed, configured and running, we use puppet itself.
 
The following commands are to be executed on the Management machine:
 
<pre>
# Get packages
yum install -y git
 
# Get foreman-installer modules
git clone --recursive https://github.com/theforeman/foreman-installer.git /root/foreman-installer
 
# Install
puppet apply -v --modulepath=/root/foreman-installer -e "include puppet, puppet::server, passenger, foreman_proxy, foreman"
</pre>
 
{{admon/note|Note|If you'd like to try and troubleshoot SELinux, install policycoreutils-python
<pre>
yum install -y policycoreutils-python
</pre>
}}
 
Foreman should then be accessible at https://host1.example.org.
 
You will be prompted to sign in: use the default user “admin” with the password “changeme”.
 
== Optional Database Backend ==
 
Please refer to this page if you'd like to use mysql or postgresql as backend for Puppet and Foreman:
 
[[How_to_Puppet_Foreman_Mysql_or_Postgresql_on_RHEL6]]
 
= Foreman Configuration =
 
Foreman needs to be configured for OpenStack deployment.
 
We need to:
* Setup smart-proxy (Foreman-proxy)
* Define globals variables
* Download Puppet Modules
* Declare hostgroups 
 
{{admon/note|Note|The following process could also be done using Foreman GUI which is described here:
[[How_to_Deploy_Openstack_Setup_Foreman_on_RHEL6_manually]]}}
 
This configuration process has been scripted using the Foreman API.
The script is available here: https://github.com/gildub/foremanopenstack-setup
 
== Smart-Proxy ==
 
Once Foreman-proxy and Foreman services are up and running, we need to link them together.
 
<pre>
foreman-setup proxy
</pre>
 
== OpenStack Puppet Modules ==
 
We need to download and import the Puppet modules for deploying and configuring the OpenStack components.
The modules are sourced from their GitHub projects.
 
All OpenStack components are available from those modules:
<pre>
git clone --recursive https://github.com/gildub/puppet-openstack.git /etc/puppet/modules/production
</pre>
 
The nova-compute, nova-controller and other (to be described soon) installers:
<pre>
git clone https://bitbucket.org/gildub/trystack.git /etc/puppet/modules/production/trystack
</pre>
 
We need to import the Puppet modules into Foreman.
 
The import can be done from either the command line or the Foreman GUI:
* Command line:
<pre>
cd /usr/share/foreman && rake puppet:import:puppet_classes RAILS_ENV=production
</pre>
 
{{admon/note|Note|To use with scripts, you can add the “batch” option to the rake import command:
<pre>rake puppet:import:puppet_classes[batch]</pre>
}}
 
 
* The GUI: Select “More -> Configuration -> Puppet classes” and click “Import from <your_smart_proxy>” button:
 
[[File:Foreman-import.png|400px]]
 
== Parameters ==
 
We provide all the parameters required by the OpenStack Puppet modules so that the different components are configured with those values.
 
<pre>
foreman-setup globals
</pre>
 
== Host Groups ==
Host Groups are an easy way to group Puppet class modules and parameters. A host, when attached to a Host Group, automatically inherits those definitions.
We manage the two types of OpenStack servers using Foreman Host Groups.
 
So, we need to create two Host Groups:
* OpenStack-Controller
* OpenStack Compute Nodes
 
<pre>
foreman-setup hostgroups
</pre>
 
= Manage Hosts =
 
To make a system part of our OpenStack infrastructure we have to:
* Make sure the host follows the Common Core definitions – See RHEL Core: Common definitions section above
* Have the host's certificate signed so it's registered with the Management server
* Assign the host either the openstack-controller or openstack-compute Host Group
 
== Register Host Certificates ==
=== Using Autosign ===
With the autosign option, hosts can be automatically registered and made visible from Foreman by adding their hostnames to the /etc/puppet/autosign.conf file.
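The /etc/puppet/autosign.conf file takes one host name, or a glob, per line; for example (the domain is a placeholder):

<pre>
*.example.org
host3.example.org
</pre>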
=== Signing Certificates ===
If you're not using the autosign option then you will have to sign the host certificate, using either:
* Foreman GUI
Open the Smart Proxies window from the menu "More -> Configuration -> Smart Proxies",
and select "Certificates" from the drop-down button of the smart-proxy you created:
 
[[File:Foreman-proxies.png|400px]]
 
From there you can manage all the hosts certificates and get them signed.
 
* The Command Line Interface
Assuming the Puppet agent (puppetd) is running on the host, the host's certificate will have
been created on the Puppet Master and will be waiting to be signed.
From the Puppet Master host, use the “puppetca” tool with the “list” command to see the waiting
certificates, for example:
 
<pre>
# puppetca list
"host3.example.org" (84:AE:80:D2:8C:F5:15:76:0A:1A:4C:19:A9:B6:C1:11)
</pre>
 
To sign a certificate, use the “sign” command and provide the hostname, for example:
<pre>puppetca sign host3.example.org</pre>
 
==  Assign a Host Group ==
Display the hosts using the “Hosts” button at the top of the Foreman GUI screen.
 
Then select the corresponding “Edit Host” drop-down button on the right side of the targeted host.
 
Assign the right environment and attach the appropriate Host Group to that host in order to make
it a Controller or a Compute node.
 
[[File:Foreman-host-hostgroup.png|400px]]
 
Save by hitting the “Submit” button.
 
== Deploy OpenStack Components ==
 
We are done!
 
The OpenStack components will be installed when the Puppet agent synchronises with the
Management server. Effectively, the classes will be applied when the agent retrieves the catalog
from the Master and runs it.
 
You can also manually trigger the agent to check in with the Puppet Master. To do so, first stop the agent service on the targeted node:
<pre>service puppet stop</pre>
 
And run it manually:
<pre>puppet agent --verbose --no-daemonize</pre>
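Once the agent run completes on a Controller node, you can check that the Nova services have registered themselves (run this on the Controller; nova-manage ships with the Folsom packages):

<pre>
nova-manage service list
</pre>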
