
Description

Deploy the Quantum virtual network service and configure Nova to use QuantumManager as its NetworkManager. Quantum includes several plugins. The openvswitch plugin is covered here.

Setup

The installation of the Quantum packages is done by the openstack-demo-install utility. The examples below show the usage of OpenvSwitch as the plugin and agent. This can be replaced by any one of the other supported plugins and agents, for example Linux Bridge.
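
If the demo installer has not been run yet, it can be invoked as shown below. This is only a sketch; it assumes the utility is provided by the openstack-utils package on your Fedora release, and the exact behaviour depends on the version shipped with it.

$> sudo yum install -y openstack-utils
$> sudo openstack-demo-install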

Note: If you have sourced keystonerc then please make sure that you set the following environment variables. These are used by the Quantum setup scripts and must match the configured Keystone user and password.

OS_USERNAME=quantum
OS_PASSWORD=servicepass
OS_AUTH_URL=http://127.0.0.1:35357/v2.0/
OS_TENANT_NAME=service
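
For example, they can be exported in the shell before running the Quantum setup scripts (servicepass is the demo password used throughout this page; substitute your own values if they differ):

$> export OS_USERNAME=quantum
$> export OS_PASSWORD=servicepass
$> export OS_AUTH_URL=http://127.0.0.1:35357/v2.0/
$> export OS_TENANT_NAME=service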

Configure a Quantum user with Keystone.

get_id () { echo $("$@" | grep ' id ' | awk '{print $4}'); }
ADMIN_PASSWORD=$OS_PASSWORD
SERVICE_HOST=127.0.0.1
SERVICE_PASSWORD=$OS_PASSWORD
SERVICE_TENANT=$(keystone tenant-list | grep service | awk '{print $2}')
ADMIN_ROLE=$(keystone role-list | grep ' admin ' | awk '{print $2}')
QUANTUM_USER=$(get_id keystone user-create --name=quantum --pass="$SERVICE_PASSWORD" --tenant-id $SERVICE_TENANT --email=quantum@example.com)
keystone user-role-add --tenant-id $SERVICE_TENANT --user-id $QUANTUM_USER --role-id $ADMIN_ROLE
QUANTUM_SERVICE=$(get_id keystone service-create --name=quantum --type=network --description="Quantum Service")
keystone endpoint-create --region RegionOne --service_id $QUANTUM_SERVICE --publicurl http://localhost:9696 --adminurl http://localhost:9696 --internalurl http://localhost:9696
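
The Keystone objects created above can be spot checked before moving on (a quick sketch using standard keystone client list commands):

$> keystone user-list | grep quantum
$> keystone service-list | grep quantum
$> keystone endpoint-list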

Install the openstack-quantum-openvswitch package. This will also install the openvswitch package.

$> sudo yum install -y openstack-quantum-openvswitch

Enable and start the openvswitch service

$> sudo systemctl enable openvswitch.service
$> sudo systemctl start openvswitch.service

Create the integration bridges (these are used for VM traffic management and layer 3 external networking)

$> sudo ovs-vsctl add-br br-int
$> sudo ovs-vsctl add-br br-ex

Configure quantum-server to use the openvswitch plugin. Please note that this will create a database:

$> sudo quantum-server-setup --plugin openvswitch

Please check that the following appear in /etc/nova/nova.conf under the DEFAULT section:

network_api_class = nova.network.quantumv2.api.API
quantum_admin_username = quantum
quantum_admin_password = servicepass
quantum_admin_auth_url = http://127.0.0.1:35357/v2.0/
quantum_auth_strategy = keystone
quantum_admin_tenant_name = service
quantum_url = http://localhost:9696/

Enable and start the quantum-server:

$> sudo systemctl enable quantum-server.service
$> sudo systemctl start quantum-server.service

Restart nova compute (the Quantum setup updates the drivers in the nova.conf configuration file):

$> sudo systemctl restart openstack-nova-compute.service

Enable and start the layer 2 agent:

$> sudo systemctl enable quantum-openvswitch-agent.service
$> sudo systemctl start quantum-openvswitch-agent.service

Configure the DHCP service (when prompted, set the hostname to localhost):

$> sudo quantum-dhcp-setup --plugin openvswitch

Enable and start the DHCP agent:

$> sudo systemctl enable quantum-dhcp-agent.service
$> sudo systemctl start quantum-dhcp-agent.service

Configure the L3 service:

$> sudo quantum-l3-setup --plugin openvswitch

Please check that the following are in /etc/quantum/l3_agent.ini:

auth_url = http://localhost:35357/v2.0/
auth_region = RegionOne
admin_tenant_name = service
admin_user = quantum
admin_password = servicepass

Enable and start the L3 agent:

$> sudo systemctl enable quantum-l3-agent.service
$> sudo systemctl start quantum-l3-agent.service

Quantum should now be up and ready to go. Check the log files to see if there are any errors:

  1. Service - /var/log/quantum/server.log
  2. OpenvSwitch agent - /var/log/quantum/openvswitch-agent.log
  3. DHCP agent - /var/log/quantum/dhcp-agent.log
  4. L3 agent - /var/log/quantum/l3-agent.log
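
A quick way to scan all of the logs at once for problems (a simple sketch, assuming the default log locations listed above):

$> sudo grep -i error /var/log/quantum/*.log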

How to test

Configuration

Ensure that the environment variables are configured to run the client

$> source keystonerc

Get the keystone tenant IDs for the demo and service tenants

$> keystone tenant-list

The demo tenant ID is used below as <DEMO_TENANT_ID> and the service tenant ID as <SERVICE_TENANT_ID>.
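
As a convenience the IDs can be captured into shell variables, in the same style as the Keystone setup snippet above (a sketch; it assumes the tenants are named exactly demo and service):

DEMO_TENANT_ID=$(keystone tenant-list | grep ' demo ' | awk '{print $2}')
SERVICE_TENANT_ID=$(keystone tenant-list | grep ' service ' | awk '{print $2}')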

Create network and subnet

$> quantum net-create --tenant-id <DEMO_TENANT_ID> net1
$> quantum subnet-create --tenant-id <DEMO_TENANT_ID> net1 10.0.0.0/24

Create a router, and add the private subnet as one of its interfaces

$> quantum router-create --tenant-id <DEMO_TENANT_ID> router1
$> quantum router-interface-add <ROUTER_ID> <SUBNET_ID>
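
If the router and subnet IDs are not at hand, they can be listed with the client (a short sketch):

$> quantum router-list
$> quantum subnet-list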

Create an external network and a subnet. Note that these belong to a different tenant, the service tenant, and that DHCP is disabled for the subnet.

$> quantum net-create --tenant-id <SERVICE_TENANT_ID> ext_net -- --router:external=True
$> quantum subnet-create  --tenant-id <SERVICE_TENANT_ID> ext_net 172.24.4.224/28 -- --enable_dhcp=False
$> quantum router-gateway-set <ROUTER_ID> <EXTERNAL_NETWORK_ID>

Get the external gateway IP

$> quantum subnet-show <EXTERNAL_SUBNET_ID>
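
The gateway address can also be extracted directly from the table output (a sketch that assumes the default tabular format of the quantum client):

$> quantum subnet-show <EXTERNAL_SUBNET_ID> | grep gateway_ip | awk '{print $4}'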

Update the gateway IP for the external bridge (using the gateway address and prefix from the external subnet)

$> sudo ip addr add 172.24.4.225/28 dev br-ex
$> sudo ip link set br-ex up
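
At this point the host should be able to reach the router's external gateway port over br-ex. The sketch below is only an example; use quantum port-list to find the address actually allocated to the router's 'qg-' port (172.24.4.226 is shown purely as an illustration):

$> ip addr show br-ex
$> quantum port-list | grep 172.24.4
$> ping -c 3 172.24.4.226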

Test connectivity between VMs by booting a VM. Note that when booting a VM the demo tenant credentials must be used.

$> export OS_USERNAME=demo
$> export OS_PASSWORD=verybadpass
$> export OS_TENANT_NAME=demo
$> export OS_AUTH_URL=http://127.0.0.1:5000/v2.0/
$> nova image-list
$> nova boot --image <IMAGE_NAME> --flavor 1 <VM_NAME>
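
Once the VM is active it should get an address on the 10.0.0.0/24 subnet. One way to reach it from the host is to ping it from inside the router namespace (a sketch; 10.0.0.3 is only an example of the address shown by nova list):

$> nova list
$> ip netns
$> sudo ip netns exec qrouter-<ROUTER_ID> ping -c 3 10.0.0.3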

Assign a floating IP to a VM

$> quantum floatingip-create ext_net
$> quantum floatingip-associate <FLOATING_IP_ID> <PORT_ID>
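
The IDs needed for the association can be looked up with the client (a sketch; the grep pattern assumes the VM received the fixed address 10.0.0.3, so substitute the address reported by nova list):

$> quantum floatingip-list
$> quantum port-list | grep 10.0.0.3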

Expected Results

  1. After the subnet is created the DHCP server should be allocated a port, which is used to hand out IP addresses to the VMs running on the network. Run quantum port-list to check that a port has been created (the verification commands used in this list are collected in the sketch after the list).
  2. Check that a namespace has been created for the DHCP agent. Run ip netns. A namespace should be created with the prefix dhcp. This can be used to ping VMs deployed on the network.
  3. The L2 agent will add the port to the integration bridge. Run sudo ovs-vsctl show. This will show that a tap device for the DHCP agent has been added to the integration bridge. The tap device name will contain the first 11 characters of the port ID.
  4. After the router creation a tap device for the router will be attached to the integration bridge. This will have the prefix 'qr-'.
  5. The DHCP IP address will be 10.0.0.1 and the router address will be 10.0.0.2.
  6. Check that a namespace has been created for the router agent. Run ip netns. A namespace should be created with the prefix qrouter. This can be used to ping VMs deployed on the network.
  7. When the router gateway is set a tap device is created on the external bridge. This has the prefix 'qg-'.
  8. VMs that are started should receive an IP address on the 10.0.0.0/24 network. In addition, if there is more than one VM on the network the VMs should be able to ping one another.
  9. VMs with floating IPs are accessible via those IP addresses.
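
The verification commands referenced in the list above can be run together from the host (a short sketch; the namespace and tap device names will contain your own IDs):

$> quantum port-list
$> ip netns
$> sudo ovs-vsctl show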