Getting started with OpenStack Nova



See Also

Note: the following describes the OpenStack Diablo compute service (Nova) on Fedora 16.

There are more extensive notes on the newer OpenStack Essex release in Getting started with OpenStack on Fedora 17 and Getting started with OpenStack EPEL.

Initial Installation

To get started with OpenStack's Compute Service (Nova), you can install it on Fedora 16:

$> sudo yum install --enablerepo=updates-testing openstack-nova

Run the helper script to get MySQL configured for use with openstack-nova. If mysql-server is not already installed, this script will install it for you.

$> sudo openstack-nova-db-setup

Nova requires the RabbitMQ AMQP messaging server to be running.

$> sudo service rabbitmq-server start
$> sudo chkconfig rabbitmq-server on
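
If you want to double-check that the broker actually came up before continuing, rabbitmqctl can report on the running server:

$> sudo rabbitmqctl status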

Nova requires the libvirtd server to be running:

$> sudo service libvirtd start
$> sudo chkconfig libvirtd on

Next, start and enable the Glance API and registry services:

$> for svc in api registry; do sudo service openstack-glance-$svc start; done
$> for svc in api registry; do sudo chkconfig openstack-glance-$svc on; done

The openstack-nova-volume service requires an LVM Volume Group called nova-volumes to exist. Here we create it using a sparse disk image attached to a loopback device.

$> sudo dd if=/dev/zero of=/var/lib/nova/nova-volumes.img bs=1M seek=20k count=0
$> sudo vgcreate nova-volumes $(sudo losetup --show -f /var/lib/nova/nova-volumes.img)
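
You can verify that the volume group exists with the usual LVM tools, e.g.:

$> sudo vgdisplay nova-volumes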

If you are testing OpenStack in a virtual machine, you need to configure nova to use qemu without KVM and hardware virtualization:

$> echo '--libvirt_type=qemu' | sudo tee -a /etc/nova/nova.conf
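
If you're not sure whether the host supports hardware virtualization, a quick check is to count the vmx (Intel) or svm (AMD) flags in /proc/cpuinfo; if it prints 0, use the qemu fallback above:

$> egrep -c '(vmx|svm)' /proc/cpuinfo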

Now you can start the various services:

$> for svc in api objectstore compute network volume scheduler; do sudo service openstack-nova-$svc start; done
$> for svc in api objectstore compute network volume scheduler; do sudo chkconfig openstack-nova-$svc on; done

Check that all the services started up correctly and look in the logs in /var/log/nova for errors. If there are none, then Nova is up and running! Note that when setting up multiple compute nodes, the network service should only be started on a single node.
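
One quick way to do that check is to query each service's status and scan the logs for errors, e.g.:

$> for svc in api objectstore compute network volume scheduler; do sudo service openstack-nova-$svc status; done
$> sudo grep -i error /var/log/nova/*.log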

Admin User, Project and Network Setup

Now you should create an admin user, project and network. I'm going to name them all after myself:

$> sudo nova-manage user admin markmc
$> sudo nova-manage project create markmc markmc
$> sudo nova-manage network create markmc 1 256 --bridge=br0
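
You can confirm the network was created by listing the networks nova knows about:

$> sudo nova-manage network list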

Then download a set of credentials for this user/project:

$> sudo nova-manage project zipfile markmc markmc
$> sudo chmod 600 nova.zip
$> sudo chown markmc:markmc nova.zip

Unpack the credentials, source the novarc and add an SSH keypair:

$> mkdir novacreds && cd novacreds
$> unzip ../nova.zip
$> . ./novarc
$> euca-add-keypair nova_key > nova_key.priv
$> chmod 600 nova*
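
euca-describe-keypairs should now list nova_key, confirming the keypair was registered:

$> euca-describe-keypairs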


To run an instance, you're going to need an image. Two options are described below:

  1. Building a Fedora 16 image using Oz
  2. Downloading ttylinux-based minimal images used by OpenStack developers for testing

Building an Image With Oz

You can very easily build an image using Oz. First, make sure it's installed:

$> sudo yum install /usr/bin/oz-install

Create a template definition file called f16.tdl along the following lines (the install URL is a standard Fedora 16 install tree, adjust the mirror to taste; the rc.local snippet pulls the SSH public key from the EC2-style metadata service at first boot):

<template>
 <name>fedora16_x86_64</name>
 <description>My Fedora 16 x86_64 template</description>
 <os>
  <name>Fedora</name>
  <version>16</version>
  <arch>x86_64</arch>
  <install type='url'>
   <url>http://download.fedoraproject.org/pub/fedora/linux/releases/16/Fedora/x86_64/os/</url>
  </install>
 </os>
 <commands>
  <command name='setup-rc-local'>
sed -i 's/rhgb quiet/console=ttyS0/' /boot/grub/grub.conf
cat >> /etc/rc.local &lt;&lt; EOF
if [ ! -d /root/.ssh ]; then
  mkdir -p /root/.ssh
  chmod 700 /root/.ssh
fi
# Fetch public key using HTTP
ATTEMPTS=30
FAILED=0
while [ ! -f /root/.ssh/authorized_keys ]; do
    curl -f http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key > /tmp/aws-key 2>/dev/null
    if [ \$? -eq 0 ]; then
        cat /tmp/aws-key >> /root/.ssh/authorized_keys
        chmod 0600 /root/.ssh/authorized_keys
        restorecon /root/.ssh/authorized_keys
        rm -f /tmp/aws-key
        echo "Successfully retrieved AWS public key from instance metadata"
    else
        FAILED=\$((\$FAILED + 1))
        if [ \$FAILED -ge \$ATTEMPTS ]; then
            echo "Failed to retrieve AWS public key after \$FAILED attempts, quitting"
            break
        fi
        echo "Could not retrieve AWS public key (attempt #\$FAILED/\$ATTEMPTS), retrying in 5 seconds..."
        sleep 5
    fi
done
EOF
  </command>
 </commands>
</template>

Then simply do:

$> sudo oz-install -d4 -u f16.tdl

Once built, you simply have to register the image with Nova:

$> sudo nova-manage image image_register /var/lib/libvirt/images/fedora16_x86_64.dsk markmc f16
$> glance index

The last command should return a list of the images registered with the Glance image registry.

Downloading Existing Images

If you don't want to build an image, just download this set of images commonly used by OpenStack developers for testing and register them with Nova:

$> mkdir images
$> cd images
$> curl -L | tar xvfzo -
$> cd ..
$> sudo nova-manage image convert images/

You can also try using Fedora 16 EC2 images:

$> wget
$> tar --xz -xvf Fedora-16-ec2-20111101-x86_64-sda.raw.tar.xz

This image doesn't have a normal bootloader; you can add one with the following script (run as root):

#!/bin/sh
set -x

raw=Fedora-16-ec2-20111101-x86_64-sda.raw

kpartx -av $raw | ( read blah blah map mm s e t loopdev p
# example output: add map loop0p1 (253:9): 0 19531250 linear /dev/loop0 1
# i.e. map=loop0p1, loopdev=/dev/loop0

parted $loopdev set 1 boot on
cat /usr/share/syslinux/mbr.bin > $loopdev

mkdir -p /mnt/img
mount /dev/mapper/$map /mnt/img
cd /mnt/img

cat > extlinux.conf <<EOF
say This is the Fedora 16 ec2 image
default linux1
timeout 300

label linux1
  kernel $(ls boot/vmlinuz*)
  append initrd=$(ls boot/initramfs*) root=UUID=$(blkid -s UUID -o value /dev/mapper/$map) rootfstype=auto ro nomodeset rootflags=ro
EOF

extlinux --install /mnt/img

cd ~
umount /mnt/img
)
kpartx -d $raw

And finally convert it to qcow2 compressed format:

$> qemu-img convert -c -p -f raw -O qcow2 Fedora-16-ec2-20111101-x86_64-sda.raw Fedora-16-ec2-20111101-x86_64-sda.qcow2
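
You can inspect the result to confirm the format and virtual size look sane:

$> qemu-img info Fedora-16-ec2-20111101-x86_64-sda.qcow2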

Launch an Instance

As a last step before launching, make sure the nbd kernel module is loaded so that injecting SSH key files into the filesystem on the qcow2 image works:

$> sudo modprobe nbd
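
You can confirm the module loaded with:

$> lsmod | grep nbd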

You should now be able to launch an image:

$> euca-run-instances f16 -k nova_key

Or, in the case of the downloaded images:

$> euca-run-instances ami-tty --kernel aki-tty --ramdisk ari-tty -k nova_key

Then observe the instance running, see the KVM VM in libvirt, SSH into the instance, check its console output, and finally terminate it:

$> euca-describe-instances
$> sudo virsh list
$> ssh -i nova_key.priv root@
$> euca-get-console-output i-00000001
$> euca-terminate-instances i-00000001


Volumes

If you use the Chrome browser, kill it before embarking on this section, as it has been known to cause the lvcreate command to fail with 'incorrect semaphore state' errors.

Start the SCSI target daemon

$> sudo service tgtd start
$> sudo chkconfig tgtd on

Create a new 1GB volume

$> VOLUME=$(euca-create-volume -s 1 -z nova | awk '{print $2}')

View the status of the new volume, and wait for it to become 'available'

$> watch "euca-describe-volumes | grep $VOLUME | grep available"

Re-run the previously terminated instance if necessary:

$> INSTANCE=$(euca-run-instances f16 -k nova_key | grep INSTANCE | awk '{print $2}')

Or, in the case of the downloaded images:

$> INSTANCE=$(euca-run-instances ami-tty --kernel aki-tty --ramdisk ari-tty -k nova_key | grep INSTANCE | awk '{print $2}')

Make the storage available to the instance (note -d is the device name the volume will appear as inside the instance)

$> euca-attach-volume -i $INSTANCE -d /dev/vdc $VOLUME

ssh to the instance and verify that the vdc device is listed in /proc/partitions

$> cat /proc/partitions

Now make the device available if /dev/vdc is not already present

$> mknod /dev/vdc b 252 32

Create and mount a file system directly on the device

$> mkfs.ext3 /dev/vdc
$> mkdir /mnt/nova-volume
$> mount /dev/vdc /mnt/nova-volume

Display some file system details

$> df -h /dev/vdc

Create a temporary file:

$> echo foo > /mnt/nova-volume/bar

Terminate and re-run the instance, then re-attach the volume and re-mount within the instance as above. Your temporary file will have persisted:

$> cat /mnt/nova-volume/bar

Unmount the volume again:

$> umount /mnt/nova-volume

Exit from the ssh session, then detach and delete the volume:

$> euca-detach-volume $VOLUME
$> euca-delete-volume $VOLUME

Floating IPs

You may carve out a block of public IPs and assign them to instances.

The first thing you need to do is make sure that nova is configured with the correct public network interface. The default is eth0, but you can change it, e.g.:

$> sudo bash -c 'echo "--public_interface=em1" >> /etc/nova/nova.conf'
$> sudo service openstack-nova-network restart

Then you can do, e.g.:

$> sudo nova-manage floating create
$> euca-allocate-address
$> euca-associate-address -i i-00000012
$> ssh -i nova_key.priv root@
$> euca-disassociate-address
$> euca-release-address
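
At any point in that sequence, euca-describe-addresses will show which floating IPs are allocated to your project and which instance each is associated with:

$> euca-describe-addresses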

Smoke Tests

Nova comes with a selection of fairly basic smoke tests which you can run against your installation. It can be useful to use these to sanity check your configuration.

First off, you need the nova-adminclient python library which isn't yet packaged:

$> sudo yum install python-pip
$> sudo pip-python install nova-adminclient

Then you need a user and project both named admin:

$> sudo nova-manage user admin admin
$> sudo nova-manage project create admin admin
$> sudo nova-manage project zipfile admin admin
$> unzip nova.zip
$> . ./novarc

Make sure you have the tty images imported as described above. You also need a block of floating IPs created, also as described above.

Then, run the tests from a fedpkg checkout:

$> fedpkg clone openstack-nova
$> cd openstack-nova
$> fedpkg switch-branch f16
$> fedpkg prep
$> cd nova-2011.3/smoketests
$> python ./run_tests.py

All the tests should pass.

If you run into import errors such as:

ImportError: No module named nose

or:

ImportError: No module named paramiko

simply install the missing dependency as follows:

$> sudo yum install -y python-nose.noarch
$> sudo yum install -y python-paramiko.noarch

Manual Setup of MySQL

As of openstack-nova-2011.3-9.el6 and openstack-nova-2011.3-8.fc16, openstack-nova is now set up to use MySQL by default. If you're updating an older installation or prefer to set up MySQL manually instead of using the openstack-nova-db-setup script, this section shows how to do it.

First install and enable MySQL:

$> sudo yum install -y mysql-server
$> sudo service mysqld start
$> sudo chkconfig mysqld on

Set a password for the root account and delete the anonymous accounts:

$> mysql -u root
mysql> update mysql.user set password = password('iamroot') where user = 'root';
mysql> delete from mysql.user where user = '';
mysql> flush privileges;

Create a database and user account specifically for nova:

mysql> create database nova;
mysql> create user 'nova'@'localhost' identified by 'nova';
mysql> create user 'nova'@'%' identified by 'nova';
mysql> grant all on nova.* to 'nova'@'%';

(If anyone can explain why nova@localhost is required even though the anonymous accounts have been deleted, I'd be very grateful :-)

Then configure nova to use the DB and install the schema:

$> echo '--sql_connection=mysql://nova:nova@localhost/nova' | sudo tee -a /etc/nova/nova.conf
$> sudo nova-manage db sync

As a final sanity check:

$> mysql -u nova -p nova
Enter password:
mysql> select * from migrate_version;

Adding a Compute Node

Okay, everything so far has been done on a single node. The next step is to add another node for running VMs.

Let's assume the machine you've set up above is called 'controller' and the new machine is called 'node'.

First, open the MySQL, rabbitmq, Glance API and iSCSI ports on controller (the libvirtd reload afterwards makes libvirt re-create its iptables rules, which lokkit will have flushed):

$ controller> sudo lokkit -p 3306:tcp
$ controller> sudo lokkit -p 5672:tcp
$ controller> sudo lokkit -p 9292:tcp
$ controller> sudo lokkit -p 3260:tcp
$ controller> sudo service libvirtd reload

Then make sure that ntp is enabled on both machines:

$> sudo yum install -y ntp
$> sudo service ntpd start
$> sudo chkconfig ntpd on

Install libvirt and nova on node:

$ node> sudo yum install --enablerepo=updates-testing openstack-nova
$ node> sudo service libvirtd start
$ node> sudo chkconfig libvirtd on
$ node> sudo setenforce 0

Configure nova so that node can find the services on controller:

$ node> sudo bash -c 'echo "--rabbit_host=controller" >> /etc/nova/nova.conf'
$ node> sudo bash -c 'echo "--sql_connection=mysql://nova:nova@controller/nova" >> /etc/nova/nova.conf'
$ node> sudo bash -c 'echo "--glance_api_servers=controller:9292" >> /etc/nova/nova.conf'
$ node> sudo bash -c 'echo "--iscsi_ip_prefix=" >> /etc/nova/nova.conf'

(The iscsi_ip_prefix value is the IP address of the controller node.)

Start the compute and network services on node:

$ node> for svc in compute network; do sudo service openstack-nova-$svc start; done
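
Back on controller, you can check that the new node's services have registered themselves; each service should be listed along with the host it runs on:

$ controller> sudo nova-manage service list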

Finally, you need to make sure the network is configured with a physical bridge interface:

$ controller> sudo nova-manage network create markmc --bridge=br0 --bridge_interface=em1

Now everything should be running as before, except the VMs are launched either on controller or node.


Cleanup

While testing OpenStack, you might want to delete everything related to OpenStack and start testing with a clean slate again.

Here's how. First, make sure to terminate all running instances:

$> euca-terminate-instances ...

Double check that you have no lingering VMs, perhaps saved to disk:

$> virsh list --all && virsh undefine
$> rm -f /var/lib/libvirt/qemu/save/instance-00000*

Then stop all the services:

$> for iii in api objectstore compute network volume scheduler; do sudo service openstack-nova-$iii stop; done
$> for iii in api registry; do sudo service openstack-glance-$iii stop; done

Delete all the packages:

$> sudo yum erase python-glance python-nova python-novaclient openstack-keystone openstack-swift*

Delete the nova database from MySQL:

$> mysql -u root -p -e 'drop database nova;'

Delete the nova-volumes VG:

$> sudo vgchange -an nova-volumes
$> sudo losetup -d /dev/loop0
$> sudo rm -f /var/lib/nova/nova-volumes.img

Take down the bridge and kill dnsmasq:

$> sudo ip link set br0 down
$> sudo brctl delbr br0
$> sudo kill -9 $(cat /var/lib/nova/networks/nova-br0.pid)

Remove all directories left behind from the packages:

$> sudo rm -rf /etc/{glance,nova,swift,keystone} /var/lib/{glance,nova,swift,keystone} /var/log/{glance,nova,swift,keystone} /var/run/{glance,nova,swift,keystone}

Finally, restart iptables to clear out all rules added by Nova. You also need to reload libvirt's iptables rules:

$> sudo service iptables restart
$> sudo service libvirtd restart
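
As a last check that the slate really is clean, confirm that no OpenStack packages remain installed:

$> rpm -qa | grep -i openstack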