== Introduction ==

=== Purpose ===

The intent of this document is to provide the Open Source and Red Hat communities with a guide to deploying OpenStack infrastructures using the Puppet/Foreman system management solution.

We describe how to deploy and provision the management system itself, and how to use it to deploy OpenStack Controller and OpenStack Compute nodes.

{{admon/note|Note|This information has been gathered from an OpenStack lab project using the latest data available at the time of writing.}}

=== Assumptions ===

* Upstream OpenStack is based on Folsom (2012.2) from EPEL6.
* The operating system is Red Hat Enterprise Linux 6.4 or later. All machines (virtual or physical) have been provisioned with a base RHEL6 system and are up to date.
* The system management is based on Foreman 1.1 from the Foreman Yum repository and Puppet 2.6.17 from Extra Packages for Enterprise Linux 6 (EPEL6).
* Foreman provides full system provisioning, but that is not covered here, at least for now.
* The Foreman Smart-Proxy runs on the same host as Foreman. Please adjust accordingly if running it on a separate host.

{{admon/note|Conventions|
* All code examples, unless specified otherwise, are to be run as root.
* The URLs provided must be replaced with the corresponding host names of the targeted environment.
}}

=== Definitions ===

{|
! Name !! Description
|-
| Host Group || Foreman definition grouping an environment, Puppet classes and variables together, to be inherited by hosts
|-
| OpenStack Controller node || Server with all the OpenStack modules needed to manage OpenStack Compute nodes
|-
| OpenStack Compute node || Server running the OpenStack Nova Compute and Nova Network modules, providing OpenStack cloud instances
|-
| RHEL Core || Base operating system installed with standard RHEL packages and the specific configuration required by all systems (or hosts)
|}

== Architecture ==

The idea is to have a management system able to quickly deploy OpenStack Controller or OpenStack Compute nodes.

=== OpenStack Components ===

An OpenStack Controller server regroups the following OpenStack modules:

* OpenStack Keystone, the identity service
* OpenStack Glance, the image repository
* OpenStack Nova Scheduler
* OpenStack Horizon, the dashboard
* OpenStack Nova API
* QPID, the AMQP messaging broker
* MySQL backend
* An OpenStack Compute

An OpenStack Compute node consists of the following modules:

* OpenStack Nova Compute
* OpenStack Nova Network
* OpenStack Nova API
* Libvirt and dependent packages

=== Environment ===

The following environment has been tested to validate all the procedures described in this document:

* Management system: either a physical or a virtual machine
* OpenStack Controller: a physical machine
* OpenStack Compute nodes: several physical machines

{{admon/note|Note|Each physical machine has two NICs, for the public and private networks respectively. That is not required for the management host.}}

{{admon/important|Important|
* In a production environment we recommend a high-availability solution for the OpenStack Controllers.
* The OpenStack modules could be used on virtual machines, but we have not tested that yet.
* One NIC per physical machine with simulated interfaces (VLANs or aliases) should work but has not been tested.}}

=== High-level work-flow ===

The goal is to achieve the OpenStack deployment in four steps:

# Deploy the system management solution, Foreman
# Prepare Foreman for OpenStack
# Deploy the RHEL Core definition with the Puppet agent on participating OpenStack nodes
# Manage each OpenStack node to be either a Controller or a Compute node

== RHEL Core: Common definitions ==

The management server itself is based upon the RHEL Core, so we define it first.

In the rest of this document we assume that every system:

* Uses the latest Red Hat Enterprise Linux version 6.x. We have tested with RHEL6.4.
* Is registered and subscribed with a Red Hat account, either RHN Classic or RHSM. We have tested with RHSM.
* Has been updated with the latest packages
* Has been configured with the following definitions

{{admon/tip|Tip|IPv6 is not required. However, for kernel dependency and performance reasons, we recommend not deactivating the IPv6 module unless you know what you're doing.}}

=== Time ===

The NTP service is required and is included during the deployment of the OpenStack components.

However, for Puppet to work properly with SSL, all the physical machines must have their clocks in sync.

Make sure all the hardware clocks are:

* Using the same time zone
* On time, with less than 5 minutes of drift from each other

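For example, a minimal way to get and keep the clocks in sync ahead of the deployment (a sketch, assuming the stock RHEL6 ntp packages and a reachable public pool server):

<pre>
yum install -y ntp ntpdate
# One-time synchronisation, then keep the clock in sync with the ntpd service
ntpdate pool.ntp.org
chkconfig ntpd on
service ntpd start
</pre>
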
=== Yum Repositories ===

Activate the following repositories:

* RHEL6 Server Optional RPMS
* EPEL6

<pre>
rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum-config-manager --enable rhel-6-server-optional-rpms
yum clean all
</pre>

We need the Augeas utility for manipulating configuration files:
<pre>
yum -y install augeas
</pre>

=== SELinux ===

SELinux is a requirement for our projects; however, at the time of writing, SELinux has not been fully validated for:

* Foreman
* OpenStack

{{admon/note|Note|If you plan to do the manual installation of the management server (further down), you can skip this.}}

In the meantime, activate SELinux in permissive mode:
<pre>
setenforce 0
</pre>

And make it persistent in the /etc/selinux/config file (note there must be no spaces around the "="):
<pre>
SELINUX=permissive
SELINUXTYPE=targeted
</pre>

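Since Augeas is already installed, the same change can be scripted instead of edited by hand; a sketch, assuming your Augeas version ships a lens covering /etc/selinux/config:

<pre>
# -s saves the change back to the file
augtool -s set /files/etc/selinux/config/SELINUX permissive
augtool -s set /files/etc/selinux/config/SELINUXTYPE targeted
</pre>
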
=== FQDN ===

Make sure every host can resolve the Fully Qualified Domain Name of the management server, either through the available DNS or alternatively via the /etc/hosts file.

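For instance, without DNS, an entry like the following in /etc/hosts on every host would do (the address and names are placeholders to adjust to your environment):

<pre>
10.100.0.1   puppet.example.org   puppet
</pre>
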
=== Puppet Agent ===

The Puppet agent must be installed on every host and be configured to:

* Point to the Puppet Master, which is our management server
* Have the Puppet plug-ins activated

The following commands make that happen:

<pre>
PUPPETMASTER="puppet.example.org"
yum install -y puppet

# Set the Puppet server
augtool -s set /files/etc/puppet/puppet.conf/agent/server $PUPPETMASTER

# Activate the Puppet plugins
augtool -s set /files/etc/puppet/puppet.conf/main/pluginsync true
</pre>

Afterwards, the /etc/puppet/puppet.conf file should look like this:

<pre>
[main]
# The Puppet log directory.
# The default value is '$vardir/log'.
logdir = /var/log/puppet

# Where Puppet PID files are kept.
# The default value is '$vardir/run'.
rundir = /var/run/puppet

# Where SSL certificates are kept.
# The default value is '$confdir/ssl'.
ssldir = $vardir/ssl

pluginsync=true

[agent]
# The file in which puppetd stores a list of the classes
# associated with the retrieved configuration. Can be loaded in
# the separate ``puppet`` executable using the ``--loadclasses``
# option.
# The default value is '$confdir/classes.txt'.
classfile = $vardir/classes.txt

# Where puppetd caches the local configuration. An
# extension indicating the cache format is added automatically.
# The default value is '$confdir/localconfig'.
localconfig = $vardir/localconfig

server=puppet.example.org
</pre>

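Once the management server described in the next section is up and its CA has signed (or auto-signed) the host's certificate, a one-shot agent run is a quick way to validate this configuration (a sketch; the agent must be able to reach the master):

<pre>
puppet agent --test
</pre>
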
== Installing the Management Server ==

Let's get started with how to deploy the Puppet/Foreman application suite in order to manage our OpenStack infrastructure.

We describe two installation methods for the management application:
* Automated installation: this is the easiest and recommended approach.
* Manual installation: walks you through the component deployments. Helpful for other OpenStack architecture scenarios and also for troubleshooting.

{{admon/note|Please note|
* The manual installation doesn't describe the Apache/SSL/Passenger components yet.
* Foreman uses SQLite by default. However, MySQL or PostgreSQL is recommended for production and/or large-scale environments.
** To use the MySQL backend, follow the "Optional MySQL Backend" section described further below.
** PostgreSQL integration is not covered here.
}}

=== Automated Installation ===

The automated installation of the management server provides:

* Puppet Master
* HTTPS service with Apache SSL and Passenger
* Foreman Proxy (Smart-Proxy) and Foreman
* No SELinux

Before starting, make sure the "RHEL Core: Common definitions" described earlier have been applied.

To get the management suite installed, configured and running, we use Puppet itself.

The following commands are to be executed on the management machine:

<pre>
# Get packages
yum install -y puppet git policycoreutils-python

# Get the foreman-installer modules
git clone --recursive https://github.com/theforeman/foreman-installer.git /root/foreman-installer

# Install
puppet -v --modulepath=/root/foreman-installer -e "include puppet, puppet::server, passenger, foreman_proxy, foreman"
</pre>

{{admon/note|Note|policycoreutils-python will be needed in the future for SELinux.}}

Foreman should then be accessible at https://host1.example.org.

You will be prompted to sign in: use the default user "admin" with the password "changeme".

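A quick reachability check can be run from the management host itself before opening a browser (a sketch; -k skips certificate verification since the Apache certificate is generated locally):

<pre>
curl -k -I https://$(hostname)/
</pre>
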
=== Manual Installation ===

The manual installation provides:

* Puppet Master
* HTTP service with WEBrick
* Foreman Proxy (Smart-Proxy) and Foreman
* SELinux

Before starting, make sure the Common Core definitions described earlier have been applied.

==== Puppet Master ====

Once the core components have been prepared, we can install the Puppet Master and Git. Git will be used to get the Puppet modules specific to OpenStack:
<pre>
yum install -y git puppet-server policycoreutils-python
</pre>

===== Initial Puppet Master configuration =====

We need to customise the Puppet Master configuration file, /etc/puppet/puppet.conf.

First we activate the Puppet plugins (the modules' custom types & facts):
<pre>
augtool -s set /files/etc/puppet/puppet.conf/main/pluginsync true
</pre>

Then we add a default Production environment to Puppet. You might want to extend this by adding other environments such as development, test or staging.

<pre>
mkdir -p /etc/puppet/modules/production
mkdir /etc/puppet/modules/common
augtool -s set /files/etc/puppet/puppet.conf/production/modulepath /etc/puppet/modules/production:/etc/puppet/modules/common
</pre>

The Puppet autosign feature allows filtering whose certificate requests will automatically be signed. The autosign.conf file should have mode 664:
<pre>
augtool -s set /files/etc/puppet/puppet.conf/master/autosign \$confdir/autosign.conf
touch /etc/puppet/autosign.conf
chmod 664 /etc/puppet/autosign.conf
</pre>

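To double-check the resulting master settings, augtool can print the section back:

<pre>
augtool print /files/etc/puppet/puppet.conf/master
</pre>
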
===== SELinux =====

In order to have SELinux enforced on the management host, we need to:

* Set the SELinux type for /etc/puppet:
<pre>
semanage fcontext -a -t puppet_etc_t '/etc/puppet(/.*)?'
</pre>

* Make sure the configuration file types get applied when files are touched:
<pre>
echo "/etc/puppet/*" >> /etc/selinux/restorecond.conf
</pre>

* Allow the Puppet Master to use the database:
<pre>
setsebool -P puppetmaster_use_db true
</pre>

==== Foreman Installation ====

Get the Foreman packages from the Yum repository:

<pre>
yum install -y http://yum.theforeman.org/rc/el6/x86_64/foreman-release-1.1RC5-1.el6.noarch.rpm
yum install -y foreman foreman-proxy foreman-mysql foreman-mysql2 rubygem-redcarpet
</pre>

===== External Node Classification =====

For the Puppet ENC we rely on the github.com/theforeman project and fetch the node.rb script from it:
<pre>
git clone git://github.com/theforeman/puppet-foreman.git /tmp/puppet-foreman
cp /tmp/puppet-foreman/templates/external_node.rb.erb /etc/puppet/node.rb
</pre>

We need to edit the variables defined at the head of the /etc/puppet/node.rb file.

We do this using the "sed" command in order to script it for later:
<pre>
sed -i "s/<%= @foreman_url %>/http:\/\/$(hostname):3000/" /etc/puppet/node.rb
sed -i 's/<%= @puppet_home %>/\/var\/lib\/puppet/' /etc/puppet/node.rb
sed -i 's/<%= @facts %>/true/' /etc/puppet/node.rb
sed -i 's/<%= @storeconfigs %>/false/' /etc/puppet/node.rb
chmod 755 /etc/puppet/node.rb
</pre>

Either way, the result should look like this (extract of the modified section):
<pre>
SETTINGS = {
  :url          => "http://host1.example.org:3000",
  :puppetdir    => "/var/lib/puppet",
  :facts        => true,
  :storeconfigs => false,
  :timeout      => 3,
</pre>

Finally we tell the Puppet Master to use the ENC:
<pre>
augtool -s set /files/etc/puppet/puppet.conf/master/external_nodes /etc/puppet/node.rb
augtool -s set /files/etc/puppet/puppet.conf/master/node_terminus exec
</pre>

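Once Foreman is running and a host is known to it, the ENC wiring can be tested by calling the script by hand; it should print a YAML document with the host's classes and parameters (the hostname is a placeholder):

<pre>
/etc/puppet/node.rb host1.example.org
</pre>
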
==== Foreman Reports ====

We use the Foreman report processor from the github.com/theforeman project downloaded earlier:
<pre>
cp /tmp/puppet-foreman/templates/foreman-report.rb.erb /usr/lib/ruby/site_ruby/1.8/puppet/reports/foreman.rb
augtool -s set /files/etc/puppet/puppet.conf/master/reports foreman
</pre>

===== Enable Foreman-Proxy features =====
<pre>
sed -i -r 's/(:puppetca:).*/\1 true/' /etc/foreman-proxy/settings.yml
sed -i -r 's/(:puppet:).*/\1 true/' /etc/foreman-proxy/settings.yml
</pre>

===== Activate & run services =====
<pre>
chkconfig foreman-proxy on
service foreman-proxy start
chkconfig foreman on
service foreman start
</pre>

Foreman should be accessible at http://host1.example.org:3000.

{{admon/note|Note|The default user is "admin" with the password "changeme".}}

=== Optional MySQL Backend ===

Let's get the DBMS and activate the service by default:
<pre>
yum install -y mysql-server
chkconfig mysqld on
service mysqld start
</pre>

Then we set the MySQL administrator password:
<pre>
MYSQL_ADMIN_PASSWD='mysql'
/usr/bin/mysqladmin -u root password "${MYSQL_ADMIN_PASSWD}"
/usr/bin/mysqladmin -u root -h $(hostname) password "${MYSQL_ADMIN_PASSWD}"
</pre>

==== Puppet database ====

We need to create a Puppet database and grant permissions to its user, "puppet".

The following command will do that for us.

{{admon/note|Note|Change the MYSQL_PUPPET_PASSWD variable to assign the password of your choice.}}

{{admon/note|Note|The command will prompt for the MYSQL_ADMIN_PASSWD we set up earlier.}}

<pre>
MYSQL_PUPPET_PASSWD='puppet'
echo "create database puppet; GRANT ALL PRIVILEGES ON puppet.* TO puppet@localhost IDENTIFIED BY '$MYSQL_PUPPET_PASSWD'; commit;" | mysql -u root -p
</pre>

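A quick way to verify that the database and the grant work is to connect as the "puppet" user (a sketch; the password is passed inline for brevity):

<pre>
mysql -u puppet -p"$MYSQL_PUPPET_PASSWD" -D puppet -e "select 1;"
</pre>
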
Finally we adjust the /etc/puppet/puppet.conf file for MySQL.

{{admon/note|Note|We reuse here the MYSQL_PUPPET_PASSWD assigned before.}}

<pre>
augtool -s set /files/etc/puppet/puppet.conf/master/storeconfigs true
augtool -s set /files/etc/puppet/puppet.conf/master/dbadapter mysql
augtool -s set /files/etc/puppet/puppet.conf/master/dbname puppet
augtool -s set /files/etc/puppet/puppet.conf/master/dbuser puppet
augtool -s set /files/etc/puppet/puppet.conf/master/dbpassword $MYSQL_PUPPET_PASSWD
augtool -s set /files/etc/puppet/puppet.conf/master/dbserver localhost
augtool -s set /files/etc/puppet/puppet.conf/master/dbsocket /var/lib/mysql/mysql.sock
</pre>

==== Foreman database ====

First off we need the MySQL gems for Foreman:
<pre>
yum -y install foreman-mysql*
</pre>

We need to configure Foreman to make good use of our MySQL Puppet database.

Modify the /etc/foreman/database.yml file so the production section looks like this:
<pre>
production:
  adapter: mysql2
  database: puppet
  username: puppet
  password: puppet
  host: localhost
  socket: "/var/lib/mysql/mysql.sock"
</pre>

And then have Foreman populate the database:

<pre>
cd /usr/share/foreman && RAILS_ENV=production rake db:migrate
</pre>

==== MySQL Optimisation ====

The following optional optimisation should be done only once the Puppet database has been created and populated.

Run the following create index command; you'll be prompted for the MySQL root password (MYSQL_ADMIN_PASSWD) specified earlier:

<pre>
echo "create index exported_restype_title on resources (exported, restype, title(50));" | mysql -u root -p -D puppet
</pre>

== Set-up Foreman ==

=== Smart-Proxy ===

Once the Foreman-Proxy and Foreman services are up and running, we need to link them together.

First, let's log into the Foreman GUI:

[[File:Foreman-login.png|400px]]

Then select "More -> Configuration -> Smart Proxies" in the menu located at the top right, and select the "New Proxy" button.
Add the definitions for the new proxy:
* The name is only a description.
* The URL should match the FQDN of your management host and the Smart-Proxy port, 8443. Use https or http depending on whether SSL is configured: by default SSL is configured in the automated installation and is not in the manual installation.

Then select the "Submit" button to validate. For example:

[[File:Foreman-new-proxy.png|400px]]

=== Import OpenStack Puppet Modules ===

We need to download the OpenStack Puppet modules from the GitHub project. All the OpenStack components are installed from those modules:

<pre>
git clone --recursive https://github.com/gildub/puppet-openstack.git /etc/puppet/modules/production
</pre>

Along with the nova-compute and nova-controller installer:
<pre>
git clone https://bitbucket.org/gildub/trystack.git
</pre>

We import the Puppet modules into Foreman using either:

* The GUI: select "More -> Configuration -> Puppet classes" and click the "Import from <your_smart_proxy>" button:

[[File:Foreman-import.png|400px]]

* The command line:
<pre>
cd /usr/share/foreman && rake puppet:import:puppet_classes RAILS_ENV=production
</pre>

{{admon/note|Note|For use in scripts, you can add the "batch" option to the rake import command:
<pre>rake puppet:import:puppet_classes[batch]</pre>
}}

=== Parameters ===

We must provide all the parameters required by the OpenStack Puppet modules in order to configure the different components with those values.
Here is the list of all the parameters to be defined in Foreman:

{|
! Name !! Value
|-
| verbose || true
|-
| mysql_root_password || changeme
|-
| keystone_db_password || changeme
|-
| glance_db_password || changeme
|-
| nova_db_password || changeme
|-
| keystone_admin_token || secret
|-
| admin_email || admin@example.org
|-
| admin_password || changeme
|-
| glance_user_password || changeme
|-
| nova_user_password || changeme
|-
| private_interface || em1*
|-
| public_interface || em2*
|-
| fixed_network_range || 10.100.10.0/24
|-
| floating_network_range || 8.21.28.128/25
|-
| horizon_secret_key || secret
|-
| controller_node_public || 10.100.0.2
|}

{{admon/note|*|Adjust these values according to your network configuration.}}

Using the Foreman GUI, go to "More -> Configuration -> Global Parameters" and use "Add Parameter" to create all the parameters described in the table above:

[[File:Foreman-global-parameters.png|400px]]

=== Host Groups ===

Host Groups are an easy way to group Puppet class modules and parameters. A host, when attached to a Host Group, automatically inherits those definitions.
We manage the two types of OpenStack server using Foreman Host Groups.

So, we need to create two Host Groups:
* openstack-controller
* openstack-compute

To create a Host Group:
# Select the menu entry "More -> Configuration -> Host Groups"
# Provide:
#* The name
#* The environment: Production is the default
#* The smart-proxy: use the one created previously

So we create the first Host Group, "openstack-controller", and validate by selecting the "Submit" button at the bottom of the page:

[[File:Foreman-new-hostgroup.png|400px]]

We repeat the same operation to create the second Host Group, "openstack-compute":

[[File:Foreman-openstack-hostgroups.png|400px]]

Finally, we need to associate the OpenStack Controller and OpenStack Compute classes respectively with the two Host Groups we have created.

=== OpenStack Controller ===

To define the OpenStack Controller Host Group, edit the openstack-controller Host Group, go to the "Puppet Classes" tab and select the TryStack class.
Activate the trystack and trystack::controller classes by clicking on the "+" icon.

[[File:Foreman-openstack-controller.png|400px]]

=== OpenStack Compute ===

To define the OpenStack Compute Host Group, edit the openstack-compute Host Group and activate the trystack and trystack::compute classes:

[[File:Foreman-openstack-compute.png|400px]]

== Manage a Host ==

To make a system part of our OpenStack infrastructure we have to:
* Make sure the host follows the Common Core definitions (see the "RHEL Core: Common definitions" section above)
* Have the host's certificate signed so it is registered with the management server
* Assign the host either the openstack-controller or the openstack-compute Host Group

=== Register Host Certificates ===

==== Using Autosign ====

With the autosign option, hosts can be automatically registered and made visible in Foreman by adding their hostnames to the /etc/puppet/autosign.conf file.

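For example, an autosign.conf accepting one specific host plus any host in a lab subdomain (the names are placeholders; one entry per line, with wildcards allowed only at the start of an entry):

<pre>
host3.example.org
*.lab.example.org
</pre>
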
==== Signing Certificates ====

If you're not using the autosign option, you will have to sign the host certificates using either:

* The Foreman GUI

Open the Smart Proxies window from the menu "More -> Configuration -> Smart Proxies", and select "Certificates" from the drop-down button of the smart-proxy you created:

[[File:Foreman-proxies.png|400px]]

From there you can manage all the host certificates and get them signed.

* The command line interface

Assuming the Puppet agent (puppetd) is running on the host, the host certificate will have been created on the Puppet Master and will be waiting to be signed.
From the Puppet Master host, use the "puppetca" tool with the "list" command to see the waiting certificates, for example:

<pre>
# puppetca list
"host3.example.org" (84:AE:80:D2:8C:F5:15:76:0A:1A:4C:19:A9:B6:C1:11)
</pre>

To sign a certificate, use the "sign" command and provide the hostname, for example:
<pre>puppetca sign host3.example.org</pre>

=== Assign a Host Group ===

Display the hosts using the "Hosts" button at the top of the Foreman GUI screen.

Then select the corresponding "Edit Host" drop-down button on the right side of the targeted host.

Assign the right environment and attach the appropriate Host Group to that host in order to make it a Controller or a Compute node.

[[File:Foreman-host-hostgroup.png|400px]]

Save by hitting the "Submit" button.

=== Deploy OpenStack Components ===

We are done!

The OpenStack components will be installed when the Puppet agent synchronises with the management server. Effectively, the classes will be applied when the agent retrieves the catalog from the Master and runs it.

You can also manually trigger the agent to check in with the Puppet Master. To do so, deactivate the agent service on the targeted node:
<pre>service puppet stop</pre>

And run the agent manually:
<pre>puppet agent --verbose --no-daemonize</pre>
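
Once the run completes on a controller node, a quick way to confirm that the Nova services registered is to list them (a sketch, assuming the trystack classes applied cleanly on a Folsom-era controller):

<pre>
nova-manage service list
</pre>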