From Fedora Project Wiki

{{QA/Test_Case
|description=
 
Everything so far has been done on a single node.
Here we add another (virtual) node for running VMs.
 
Let's assume the machine you've set up above is called 'controller' and the new machine is called 'node'.
 
|setup=
 
Open the Qpid, MySQL, iSCSI and Glance API ports on controller:
 
$ controller> sudo lokkit -p 5672:tcp
$ controller> sudo lokkit -p 3306:tcp
$ controller> sudo lokkit -p 3260:tcp
$ controller> sudo lokkit -p 9292:tcp
$ controller> sudo service libvirtd reload
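To confirm the firewall changes took effect, you can probe the ports from the node (or any other host). This is a rough sketch using bash's built-in /dev/tcp pseudo-device, so it needs no extra packages; the 'controller' hostname is the one assumed throughout this page:

```shell
#!/bin/bash
# Probe a TCP port and report whether it accepts connections.
# Uses bash's /dev/tcp redirection; 'timeout' bounds slow connection attempts.
check_port() {
  local host=$1 port=$2
  if timeout 2 bash -c "echo > /dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} open"
  else
    echo "${host}:${port} closed"
  fi
}

# The four ports opened above: Qpid, MySQL, iSCSI, Glance API
for port in 5672 3306 3260 9292; do
  check_port controller "$port"
done
```

All four should report "open" once lokkit has run and the services are up.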
 
Configure the network with a physical bridge interface:
 
$ controller> sudo systemctl stop openstack-nova-network.service
$ controller> sudo ip link set testnetbr0 down
$ controller> sudo brctl delbr testnetbr0
$ controller> sudo kill -9 $(cat /var/lib/nova/networks/nova-testnetbr0.pid)
$ controller> mysql -unova -pnova nova -e 'update networks set bridge_interface="em1" where label="testnet"'
$ controller> sudo systemctl start openstack-nova-network.service
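After deleting the bridge and restarting nova-network, a quick sanity check is to look at /sys/class/net. A minimal sketch (testnetbr0 and em1 are the interface names used above; em1 may be named differently on your hardware):

```shell
#!/bin/bash
# Report whether a network interface currently exists, by checking sysfs.
iface_exists() {
  if [ -d "/sys/class/net/$1" ]; then
    echo "$1 present"
  else
    echo "$1 absent"
  fi
}

iface_exists testnetbr0   # expect "testnetbr0 absent" once the bridge is deleted
iface_exists em1          # expect "em1 present" on the controller
```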
 
Make sure that NTP is enabled on both machines:
 
$> sudo yum install -y ntp
$> sudo systemctl start ntpd.service
$> sudo systemctl enable ntpd.service
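NTP matters here because Nova services compare timestamps across hosts. A rough way to spot clock skew is to compare epoch seconds from both machines; this helper is only a sketch (the ssh invocation in the comment assumes you can reach 'node' from the controller):

```shell
#!/bin/bash
# Compare two epoch timestamps and flag skew greater than a threshold (seconds).
skew_ok() {
  local t1=$1 t2=$2 max=${3:-5}
  local diff=$(( t1 - t2 ))
  [ "$diff" -lt 0 ] && diff=$(( -diff ))
  if [ "$diff" -le "$max" ]; then
    echo "in sync (${diff}s)"
  else
    echo "skew ${diff}s"
  fi
}

# Real usage would be something like:
#   skew_ok "$(date +%s)" "$(ssh node date +%s)"
skew_ok 1331200000 1331200003   # prints "in sync (3s)"
skew_ok 1331200000 1331200100   # prints "skew 100s"
```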
 
On the compute node, follow the "Configure sudo", "Update your machine", "Enable the Qpid Broker", "Enable libvirt", "Optionally Load nbd" and "Put SELinux into Permissive Mode" instructions from [[Test_Day:2012-03-08_OpenStack_Test_Day|the main page]].
 
Configure nova so that node can access services on controller:
 
$ node> echo '1.2.3.4 controller' <nowiki>| sudo tee -a /etc/hosts</nowiki>
$ node> sudo yum install -y --enablerepo=updates-testing openstack-nova openstack-keystone
$ node> sudo openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller
$ node> sudo openstack-config --set /etc/nova/nova.conf DEFAULT sql_connection mysql://nova:nova@controller/nova
$ node> sudo openstack-config --set /etc/nova/nova.conf DEFAULT glance_api_servers controller:9292
$ node> sudo openstack-config --set /etc/nova/nova.conf DEFAULT iscsi_ip_prefix 1.2.3.4
$ node> sudo openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
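If the openstack-config calls above succeed, the DEFAULT section of /etc/nova/nova.conf on the node should end up containing roughly the following (1.2.3.4 stands in for the controller's real IP, as elsewhere on this page; exact spacing may vary):

```ini
[DEFAULT]
qpid_hostname = controller
sql_connection = mysql://nova:nova@controller/nova
glance_api_servers = controller:9292
iscsi_ip_prefix = 1.2.3.4
auth_strategy = keystone
```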
 
Enable the compute service:
 
$ node> sudo systemctl start openstack-nova-compute.service
$ node> sudo systemctl enable openstack-nova-compute.service
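Once nova-compute starts, it should register itself in the database within a minute or so. On the controller, nova-manage service list prints one line per service, with a ":-)" state for services that are checking in. A small helper to grep for that (the sample line below is illustrative, not real output):

```shell
#!/bin/bash
# Read 'nova-manage service list' output on stdin and report whether a
# live nova-compute is registered for the given host.
compute_alive() {
  local host=$1
  if grep 'nova-compute' | grep "$host" | grep -q ':-)'; then
    echo "nova-compute on $host: alive"
  else
    echo "nova-compute on $host: missing"
  fi
}

# Real usage (on the controller):
#   sudo nova-manage service list | compute_alive node
echo 'nova-compute  node  nova  enabled  :-)  2012-11-16 18:37:00' | compute_alive node
```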
 
|actions=
Now when the controller launches instances (see [[QA:Testcase_launch_an_instance_on_OpenStack]]), they are scheduled on either the controller or the node.

You should also try [[QA:Testcase_attach_a_volume_to_an_instance]] and [[QA:Testcase_OpenStack_floating_IPs]] with instances scheduled on the compute node.
 
|results=
Verify where instances are running with euca-describe-instances.
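euca-describe-instances prints one INSTANCE line per VM. A quick sketch for pulling instance IDs and states out of that output (the field positions assumed here may vary with the euca2ools version):

```shell
#!/bin/bash
# Summarise INSTANCE lines from euca-describe-instances output on stdin:
# field 2 is the instance ID, field 6 the state (in the usual EC2-style layout).
list_instances() {
  awk '$1 == "INSTANCE" { print $2, $6 }'
}

# Real usage:
#   euca-describe-instances | list_instances
printf 'INSTANCE\ti-00000001\tami-1\tserver-1\tserver-1\trunning\n' | list_instances
```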
}}


[[Category:OpenStack Test Cases]]
[[Category:Cloud SIG]]

Latest revision as of 18:37, 16 November 2012
