From Fedora Project Wiki


Revision as of 04:33, 14 February 2009

TurboGears - SOP

Contact Information

Owner: Fedora Infrastructure Team

Contact: #fedora-admin

Persons: abadger1999, ricky, lmacken

Location: Phoenix

Servers: app3 and app4, puppet1

Purpose: Provide in-house web applications for our users


We have many TurboGears applications deployed in our infrastructure. This SOP and the Supervisor SOP explain how TurboGears apps are deployed.

Deploying a new App

These instructions will help you set up a load-balanced TurboGears application that runs on a URL of the form:

Configuration of the new application is done on puppet1. If you need to drop rpms of the application into the Fedora Infrastructure repository (because they are not available in Fedora), that presently occurs on lockbox.

Add RPMs to the Fedora Infrastructure Repo

1. Copy the rpms to lockbox
2. Sign the rpms with the Fedora Infrastructure Key

rpm --addsign foo-1.0-1.el5.*.rpm

3. Copy the rpms to the repo directory

mv foo-1.0-1.el5.src.rpm /netapp/app/fi-repo/el/5/SRPMS/
mv foo-1.0-1.el5.x86_64.rpm /netapp/app/fi-repo/el/5/x86_64/

4. Run createrepo to regenerate the repo metadata

cd /netapp/app/fi-repo/el/5/SRPMS/
sudo createrepo .
cd /netapp/app/fi-repo/el/5/x86_64/
sudo createrepo .


Configure the application

First log into puppet1 and checkout the repositories our configs are stored in:

$ CVSROOT=/cvs/puppet cvs co manifests
$ CVSROOT=/cvs/puppet cvs co configs

Create the manifest

1. cd manifests/services
2. Create a file named myapp.pp with something similar to the following:

class myapp-proxy inherits httpd {
    apachefile { "/etc/httpd/conf.d/myapp-proxy.conf":
        source => 'web/myapp-proxy.conf'
    }
}

This defines a class that we'll add to the proxy servers to send requests to the application running on the app servers.

3. Continue editing myapp.pp and add something like the following:

class myapp-server inherits turbogears {
    include supervisor

    package { myapp:
        ensure => latest,
    }

    templatefile { '/etc/myapp.cfg':
        content => template('/var/lib/puppet/config/web/applications/myapp-prod.cfg.erb'),
        notify  => Service['supervisord'],
        owner   => 48,
        mode    => '640',
    }
}

This defines a server class that we'll add to the app servers. The package definition uses the name of your application's rpm package to install it from a yum repo along with its required dependencies. If you are developing and building the application yourself and control when new releases reach the yum repo, set ensure => latest to automatically get the latest version; otherwise set ensure => present so we can vet new releases before installing them on the server.
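For example, the vetted variant is the same package resource as above with only the ensure value changed:

```
package { myapp:
    # 'present' installs the package once; later upgrades happen only
    # when an admin explicitly runs yum upgrade after vetting a release
    ensure => present,
}
```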

Now that we've defined the files and packages our app uses we need to define which machines the files and packages belong on.

1. cd ~/manifests/servergroups
2. If this application is going to run on the RHEL app servers, edit appRhel.pp; if it's going to run on the Fedora app servers, edit appFc.pp. In either case we're just including the new server class in the file:

class appRhel {
    include pkgdb-server
    include myapp-server
}

3. Next edit the manifest for the proxy servers, proxy.pp:

class proxy {
    include pkgdb-proxy
    include myapp-proxy
}

That's it for the manifests; now we need to create the config files we referenced in the manifest file.

Create the proxy config

1. cd ~/configs/web
2. Create myapp-proxy.conf and put the following into the file:

<Location /myapp>
    RequestHeader set CP-Location /myapp
</Location>

<Location ~ /myapp/(static|tg_js)>
    Header unset Set-Cookie
</Location>

RewriteEngine On
RewriteRule ^/myapp(.*)      balancer://myappCluster/myapp$1 [P]

The first section tells CherryPy that it's running under the /myapp/ directory.

The second unsets cookies when requesting static resources. If you have other directories of all-static files (images, css, javascript, raw html, etc.), include them in the regexp. This will allow us to set up caching of these directories in the next step.

The last section makes all requests with /myapp as the base directory go to the servers set up in the balancer config file.

3. Edit balancer.conf to tell the proxy server which app servers to send requests to. Add something like this:

<Proxy balancer://myappCluster>
    BalancerMember timeout=3
    BalancerMember timeout=3
</Proxy>

Currently we have two app servers running RHEL and two servers running Fedora. If your application is going to run on the RHEL servers, use app1 and app2. If it's going to run on Fedora, use app3 and app4. The port number is the one that your TurboGears app is listening on. If you haven't allocated one yet, look at the PortRegistry to see what's available. This port may also need to be added to the iptables rules in appFc.pp or appRhel.pp.
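A filled-in version of the stanza above might look like the following. The hostnames and port number here are hypothetical illustrations, not values from this SOP; use the actual app servers for your platform and the port you allocated in the PortRegistry:

```
<Proxy balancer://myappCluster>
    # app3/app4 are the Fedora app servers; the domain and port 8086 are assumed
    BalancerMember http://app3.example.phx.redhat.com:8086 timeout=3
    BalancerMember http://app4.example.phx.redhat.com:8086 timeout=3
</Proxy>
```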


As mentioned in the last section, we have the ability to cache static files for our TurboGears apps.

1. cd ~/configs/web/
2. Edit modcache.conf and add a CacheEnable line for every directory we can cache, like so:

CacheEnable disk /myapp/tg_js/
CacheEnable disk /myapp/static/

Remember that if you list a directory in this file, you *must* unset any cookies on the page in the myapp-proxy.conf file. If you don't, the cache will distribute cookies for people's sessions to the wrong clients, leading to people being logged in as someone else.
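To make that pairing concrete, here is an illustrative side-by-side of the two fragments that must stay in sync for a single cached directory (paths follow the myapp example used throughout):

```
# modcache.conf: mark the directory cacheable
CacheEnable disk /myapp/static/

# myapp-proxy.conf: a matching Location must strip session cookies
<Location ~ /myapp/static>
    Header unset Set-Cookie
</Location>
```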

Application config file

The final piece is to create a config file template for your app.

1. cd ~/configs/web/applications/
2. Edit myapp-prod.cfg.erb

You should look at other applications' config files and the one you've been using for testing locally. A few things to note:

  • This file is a template. So using:
<%= myappDatabasePassword %>

will substitute the password from the config file into the template. This keeps passwords out of the configs repository and thus keeps them from being logged to a publicly readable list.

  • server.socket_port should be set to the same port you used in balancer.conf
  • The following settings seem to yield reasonable performance. These are good defaults until you have a chance to test and refine the settings:

  • Remember to set server.environment="production" instead of "development".
  • Since the app will be running under /myapp, and behind a proxy, make sure the following are set correctly:
base_url_filter.on = True
base_url_filter.use_x_forwarded_host = True
base_url_filter.base_url = ""
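Putting the notes above together, a minimal sketch of myapp-prod.cfg.erb might look like this. The port, database host, database name, and template variable name are hypothetical; match the port to your balancer.conf entry and the variable to your private puppet config:

```
# myapp-prod.cfg.erb -- illustrative sketch, not a complete config

server.socket_port = 8086        # assumed port; must match balancer.conf
server.environment = "production"

base_url_filter.on = True
base_url_filter.use_x_forwarded_host = True
base_url_filter.base_url = ""

# the password is substituted at deploy time from the private config,
# so it never lands in the configs repository
sqlalchemy.dburi = "postgres://myapp:<%= myappDatabasePassword %>@db1/myapp"
```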

Configure supervisor

Supervisor starts our applications.

1. Log into puppet1
2. cd configs/web/applications
3. Edit supervisord.conf. You want to add a new entry similar to this:

[program:MYAPP]
command=/usr/local/bin/ /usr/sbin/start-MYAPP /etc/MYAPP.cfg

Modify the MYAPP entries to fit your application.

[program:MYAPP] should contain a short, lowercase version of your program name. Supervisor commands will use this to identify your program (like supervisorctl restart myapp). For more information about these commands, see the Supervisor SOP.

/usr/sbin/start-MYAPP should be the path to the script you use to start your application.

/etc/MYAPP.cfg is the path to the config file you use with your application.
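For example, a filled-in entry for the hypothetical myapp might look like the following. The start script and config paths are assumptions; autostart and autorestart are standard supervisord program options:

```
[program:myapp]
command=/usr/sbin/start-myapp /etc/myapp.cfg
autostart=true
autorestart=true
```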

Upgrading an App

First put the new packages in the infrastructure repo as noted above.

Then on puppet1 run:

sudo func 'app[1-5].fedora*' call command run 'yum clean metadata'
sudo func 'app[1-5].fedora*' call command run 'yum -y upgrade APPPKGNAME'
sudo func 'app[1-2].fedora*' call command run 'supervisorctl restart APPNAME'
sudo func 'app[3-5].fedora*' call command run 'supervisorctl restart APPNAME'

When running yum upgrade, make sure you specify the APPPKGNAME! We don't want to have yum upgrade every package on the box as, in many cases, we need to review the packages that will be updated instead of blindly applying them.

The first two commands upgrade the package on the app servers.

The second two commands restart the app. We do it in two parts so that we always have some app servers ready to handle requests. This should avoid downtime.

After restarting the servers it may be necessary to clean the cache of static files. This is because javascript, css, and other static files are cached. If those reference things that are not available in the new server, then we will get errors. Cleaning the cache is done by rm'ing the cache on the proxy servers.

ssh proxy1
sudo su -
rm -rf /srv/cache/mod_cache/*

Troubleshooting and Resolution