QA:Testcase Nova Create Bootable Volume


Latest revision as of 16:26, 2 April 2013

Description

Nova instances can be booted from volume, analogous to EBS-backed instances in EC2.

We construct a bootable volume, then fire up an instance backed by this volume.

Setup

We assume that an instance has already been booted in the previous test case, and we use this as a builder to facilitate the creation of a bootable volume.

We also need a bootable image, which may be downloaded from:

$> wget https://launchpadlibrarian.net/83305348/cirros-0.3.0-x86_64-disk.img

Finally, we assume that the nova-volume service or cinder is enabled and running.

How to test

Upload the image to glance:

$> glance add name=cirros_boot is_public=true disk_format=qcow2 container_format=bare < ./cirros-0.3.0-x86_64-disk.img
$> IMAGE_ID=$(glance image-list | awk '/cirros_boot/ {print $2}')
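The awk extraction above relies on the tabular layout of glance image-list output; a minimal sketch of what it is parsing (the sample row and UUID below are placeholders, not real output):

```shell
# glance image-list prints an ASCII table; the row for our image looks
# roughly like the sample below (layout assumed, UUID is a placeholder).
# awk selects the matching row and prints field 2 -- the ID column
# sitting between the first two pipe separators.
sample='| 0a1b2c3d-4e5f-6789-abcd-ef0123456789 | cirros_boot | qcow2 |'
echo "$sample" | awk '/cirros_boot/ {print $2}'
```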

Create a 1 GB volume, which we will make bootable:

$> cinder create --image-id $IMAGE_ID --display_name=bootable_cirros 1
$> VOLUME_ID=$(cinder list | awk '/bootable_cirros/ {print $2}')

and wait for the volume to become available:

$> watch "cinder show bootable_cirros | grep status"
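If you prefer a loop that exits on its own rather than a watch you have to interrupt, a small polling helper can be sketched as follows (wait_for_status is a hypothetical name, and the exact cinder show table layout may vary between releases):

```shell
# Poll a show command until its output contains the wanted status.
# The status is matched with surrounding spaces so that, for example,
# "available" does not also match "unavailable".
wait_for_status() {
    show_cmd=$1
    wanted=$2
    until $show_cmd | grep -q " $wanted "; do
        sleep 5
    done
}

# e.g., assuming VOLUME_ID is set as above:
#   wait_for_status "cinder show $VOLUME_ID" available
```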

Now snapshot the bootable volume we just created:

$> cinder snapshot-create --display_name bootable_snapshot $VOLUME_ID

and wait for the snapshot to become available:

$> watch "cinder snapshot-show bootable_snapshot"
$> SNAPSHOT_ID=$(cinder snapshot-list | awk '/bootable_snapshot/ {print $2}')

Now we can boot from the bootable volume. No image needs to be passed to nova boot, since the block device mapping supplies the root disk from the snapshot.

$> nova boot --flavor 1 --block_device_mapping vda=${SNAPSHOT_ID}:snap::0 --key_name nova_key volume_backed
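The --block_device_mapping value packs several fields into one colon-separated string. As a sketch of how it decomposes (field meanings per the grizzly-era syntax; the UUID is a placeholder):

```shell
# <device>=<id>:<type>:<size>:<delete_on_terminate>
#   vda          : guest device to attach the volume as
#   $SNAPSHOT_ID : the snapshot to clone the boot volume from
#   snap         : the id refers to a snapshot (rather than a volume)
#   (empty size) : default to the size recorded in the snapshot
#   0            : do not delete the volume when the instance terminates
SNAPSHOT_ID=0a1b2c3d-4e5f-6789-abcd-ef0123456789   # placeholder value
echo "vda=${SNAPSHOT_ID}:snap::0"
```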

Expected Results

You should be able to ssh into the volume-backed instance.

Also note that for the volume-backed instance you've fired up, there is a volume cloned from the corresponding snapshot:

$> cinder list
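To ssh in, you first need the instance's address. A sketch of pulling it out of the nova show table (the sample row, the network label, and the cirros default user are assumptions):

```shell
# nova show prints an ASCII table; the address row looks roughly like
# the sample below (network label assumed). awk matches the row and
# prints the field holding the address.
sample='| private network | 10.0.0.4 |'
IP_ADDR=$(echo "$sample" | awk '/ network /{print $5}')
echo "$IP_ADDR"
# then, for example:
#   ssh -o StrictHostKeyChecking=no -i nova_key.priv cirros@$IP_ADDR
```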