QA:Testcase Nova Create Bootable Volume

From FedoraProject


Revision as of 12:07, 26 October 2012


Description

Nova instances can be booted from volume, analogous to EBS-backed volumes in EC2.

We construct a bootable volume, then fire up an instance backed by this volume.

Setup

We assume that an instance has already been booted in the previous test case, and we use this as a builder to facilitate the creation of a bootable volume. Capture the instance name as an environment variable:

$> INSTANCE=<your instance name>

We also need a rootfs-style image, which may be downloaded from:

$> wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-rootfs.img.gz
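A quick sanity check after the download helps catch a truncated transfer; `gzip -t` verifies the archive without extracting it. A sketch (the stand-in file created below is only so the check can be demonstrated anywhere; in the real run, test the downloaded file instead):

```shell
# Stand-in for the downloaded image, for illustration only
echo "rootfs" | gzip > cirros-0.3.0-x86_64-rootfs.img.gz

# -t tests archive integrity without writing the decompressed data
gzip -t cirros-0.3.0-x86_64-rootfs.img.gz && echo "archive OK"
```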

Finally, we assume that either the nova-volume service or cinder is enabled and running.

How to test

Create a 1GB volume, which we will make bootable:

$> cinder create --display_name=bootable_volume 1
$> VOLUME_ID=$(cinder list | awk '/bootable_volume/ {print $2}')
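The `cinder` CLI prints an ASCII table, and the `awk` one-liner above relies on whitespace splitting making the ID the second field (the first field being the row's leading `|`). A demonstration on a fabricated table row (the UUID is made up):

```shell
# One data row as printed by the cinder/nova table formatter (UUID fabricated)
row='| 5d6f8a2e-0000-0000-0000-444455556666 | available | bootable_volume | 1 |'

# awk splits on whitespace: $1 is the leading "|", $2 is the ID column
VOLUME_ID=$(echo "$row" | awk '/bootable_volume/ {print $2}')
echo "$VOLUME_ID"
```

which prints the fabricated UUID, confirming that field 2 is the ID column.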

and wait for the volume to become available:

$> watch "cinder show bootable_volume | grep status"
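For unattended runs, the interactive `watch` can be replaced by a small polling loop. A sketch: the `get_status` stub below stands in for something like `cinder show bootable_volume | awk '/ status / {print $4}'`, and returns immediately here so the loop can be demonstrated:

```shell
# Stub standing in for: cinder show bootable_volume | awk '/ status / {print $4}'
get_status() { echo "available"; }

# Poll until the reported status matches, with a bounded number of retries
wait_for_status() {
  local want=$1 tries=0
  until [ "$(get_status)" = "$want" ]; do
    tries=$((tries + 1))
    [ "$tries" -ge 60 ] && return 1   # give up after ~5 minutes
    sleep 5
  done
}

wait_for_status available && echo "volume is available"
```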

Temporarily attach the volume to your builder instance; this will allow us to copy image data into the volume:

$> nova volume-attach $INSTANCE $VOLUME_ID /dev/vdb

Wait for the volume status to show as in-use:

$> watch "cinder show bootable_volume | grep status"

Format and mount volume to a staging mount point:

$> ssh -o StrictHostKeyChecking=no -i nova_key.priv $USER_NAME@$IP_ADDR << EOF
set -o errexit
set -o xtrace
sudo mkdir -p /tmp/stage
sudo mkfs.ext3 -b 1024 /dev/vdb 1048576
sudo mount /dev/vdb /tmp/stage
sudo touch /tmp/stage/cirros-0.3.0-x86_64-rootfs.img.gz
sudo chown $USER_NAME /tmp/stage/cirros-0.3.0-x86_64-rootfs.img.gz
EOF
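The explicit size argument to `mkfs.ext3` matters here: 1048576 blocks of 1024 bytes each is exactly 1 GiB, matching the volume created above. The arithmetic:

```shell
# 1048576 blocks * 1024 bytes/block = 1 GiB
echo "$(( 1048576 * 1024 )) bytes"
echo "$(( 1048576 * 1024 / 1024 / 1024 / 1024 )) GiB"
```

which prints 1073741824 bytes, i.e. 1 GiB.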

Copy the image to the staging directory on the builder instance:

$> scp -o StrictHostKeyChecking=no -i nova_key.priv cirros-0.3.0-x86_64-rootfs.img.gz $USER_NAME@$IP_ADDR:/tmp/stage

Unpack the image into the volume (don't worry if the final unmount fails):

$> ssh -o StrictHostKeyChecking=no -i nova_key.priv $USER_NAME@$IP_ADDR << EOF
set -o errexit
set -o xtrace
cd /tmp/stage
sudo mkdir -p /tmp/image
sudo gunzip cirros-0.3.0-x86_64-rootfs.img.gz
sudo mount cirros-0.3.0-x86_64-rootfs.img /tmp/image
sudo cp -pr /tmp/image/* /tmp/stage/
cd
sync
sudo umount /tmp/image
sudo umount /tmp/stage || true
EOF

Detach the volume from the builder instance:

$> nova volume-detach $INSTANCE $VOLUME_ID

and wait for the volume status to show as available:

$> watch "cinder show bootable_volume | grep status"

Now snapshot the bootable volume we just created:

$> cinder snapshot-create --display_name bootable_snapshot $VOLUME_ID

and wait for the snapshot to become available:

$> watch "cinder snapshot-show bootable_snapshot"
$> SNAPSHOT_ID=$(cinder snapshot-list | awk '/bootable_snapshot/ {print $2}')

Now we can boot from the bootable volume. We use the same image as the builder instance, but that is only in order to retrieve the image properties.

$> IMAGE_ID=$(glance image-list | grep $(nova show $INSTANCE | awk '/image/ {print $4}') | awk '{print $2}')
$> nova boot --flavor 1 --image $IMAGE_ID --block_device_mapping vda=${SNAPSHOT_ID}:snap::0 --key_name nova_key volume_backed
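The `--block_device_mapping` value packs four colon-separated fields after the device name, per nova's legacy block-device-mapping syntax: `<device>=<id>:<type>:<size>:<delete-on-terminate>`. Here the type is `snap` (the ID refers to a snapshot), the size is left empty (taken from the snapshot), and `0` keeps the volume after the instance terminates. Splitting a sample value back apart (the snapshot ID is fabricated):

```shell
SNAPSHOT_ID="abcd-1234"   # fabricated for illustration
mapping="vda=${SNAPSHOT_ID}:snap::0"

# Fields after "vda=": <id>:<type>:<size>:<delete-on-terminate>
IFS=: read -r id type size delete <<EOF
${mapping#vda=}
EOF
echo "id=$id type=$type size=$size delete_on_terminate=$delete"
```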

Expected Results

You should be able to ssh into the volume-backed instance.

Also note that for the volume-backed instance you've fired up, there is a volume cloned from the corresponding snapshot:

$> cinder list