Description

Nova instances can be booted from volume, analogous to EBS-backed instances in EC2.

We construct a bootable volume, then fire up an instance backed by this volume.

Setup

We assume that an instance has already been booted in the previous test case, and we use this as a builder to facilitate the creation of a bootable volume.
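
The later steps refer to this builder through a few shell variables, together with the nova_key.priv key from the previous test case. A minimal sketch of the values they expect (the placeholders below are assumptions and should match however the builder was launched):

$> INSTANCE=<name of the builder instance>
$> USER_NAME=<login user of the builder image>
$> IP_ADDR=<IP address of the builder instance, e.g. from "nova list">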

We also need a rootfs-style image, which may be downloaded from:

$> wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-rootfs.img.gz

Finally, we assume that the nova-volume service or cinder is enabled and running.
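
One way to confirm this on a Fedora host, assuming the services were installed from the openstack-cinder or openstack-nova packages (the unit names are an assumption and may differ on other setups), is to check the relevant systemd unit:

$> systemctl status openstack-cinder-volume.service
$> systemctl status openstack-nova-volume.service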

How to test

Create a 1GB volume, which we will make bootable:

$> cinder create --display_name=bootable_volume 1
$> VOLUME_ID=$(cinder list | awk '/bootable_volume/ {print $2}')

and wait for the volume to become available:

$> watch "cinder show bootable_volume | grep status"

Temporarily attach the volume to your builder instance; this will allow us to copy image data into the volume:

$> nova volume-attach $INSTANCE $VOLUME_ID /dev/vdb

Wait for the volume status to show as in-use:

$> watch "cinder show bootable_volume | grep status"

Format and mount volume to a staging mount point:

$> ssh -o StrictHostKeyChecking=no -i nova_key.priv $USER_NAME@$IP_ADDR << EOF
set -o errexit
set -o xtrace
sudo mkdir -p /tmp/stage
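# build a 1GiB ext3 filesystem (1048576 blocks of 1024 bytes) on the attached volume and mount it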
sudo mkfs.ext3 -b 1024 /dev/vdb 1048576
sudo mount /dev/vdb /tmp/stage
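# pre-create the image file and hand ownership to the ssh user so the later scp can overwrite it without root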
sudo touch /tmp/stage/cirros-0.3.0-x86_64-rootfs.img.gz
sudo chown $USER_NAME /tmp/stage/cirros-0.3.0-x86_64-rootfs.img.gz
EOF

Copy image to the staging directory on the builder instance:

$> scp -o StrictHostKeyChecking=no -i nova_key.priv cirros-0.3.0-x86_64-rootfs.img.gz $USER_NAME@$IP_ADDR:/tmp/stage

Unpack image into the volume (don't worry about an unmount failure).

$> ssh -o StrictHostKeyChecking=no -i nova_key.priv $USER_NAME@$IP_ADDR << EOF
set -o errexit
set -o xtrace
cd /tmp/stage
sudo mkdir -p /tmp/image
sudo gunzip cirros-0.3.0-x86_64-rootfs.img.gz
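# mount the uncompressed rootfs image and copy its contents onto the volume mounted at /tmp/stage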
sudo mount cirros-0.3.0-x86_64-rootfs.img /tmp/image
sudo cp -pr /tmp/image/* /tmp/stage/
cd
sync
sudo umount /tmp/image
sudo umount /tmp/stage || true
EOF

Detach the volume from the builder instance:

$> nova volume-detach $INSTANCE $VOLUME_ID

and wait for the volume status to show as available:

$> watch "cinder show bootable_volume | grep status"

Now snapshot the bootable volume we just created:

$> cinder snapshot-create --display_name bootable_snapshot $VOLUME_ID

and wait for the snapshot to become available:

$> watch "cinder snapshot-show bootable_snapshot"
$> SNAPSHOT_ID=$(cinder snapshot-list | awk '/bootable_snapshot/ {print $2}')

Now we can boot from the bootable volume. We use the same image as the builder instance, but that is only in order to retrieve the image properties.

$> IMAGE_ID=$(glance image-list | grep $(nova show $INSTANCE | awk '/image/ {print $4}') | awk '{print $2}')
$> nova boot --flavor 1 --image $IMAGE_ID --block_device_mapping vda=${SNAPSHOT_ID}:snap::0 --key_name nova_key volume_backed
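
The --block_device_mapping value follows the nova CLI's <device>=<id>:<type>:<size(GB)>:<delete-on-terminate> format, so vda=${SNAPSHOT_ID}:snap::0 creates a new volume from the snapshot, attaches it as vda, sizes it from the snapshot, and keeps it after the instance terminates. One way to watch the new instance come up, following the same pattern as the earlier checks:

$> watch "nova show volume_backed | grep status"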

Expected Results

You should be able to ssh into the volume-backed instance.
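
As a sketch of that check (the IP address comes from the instance listing, and the cirros login user is an assumption based on the rootfs image used above):

$> nova list
$> ssh -o StrictHostKeyChecking=no -i nova_key.priv cirros@<IP of the volume_backed instance>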

Also note that, for the volume-backed instance you've fired up, there is now a volume cloned from the corresponding snapshot:

$> cinder list