XavierLamien/Infrastructure/FedoraCloud

From FedoraProject

Latest revision as of 14:14, 15 March 2009

<DRAFT>


== Fedora Cloud ==

This page has been set up to track what is actually going on with our Fedora cloud test instance, which is running on the xen6 box for now. It will also serve as a starting page for what we need to do and what work needs to be done.


== Overview ==

=== Technologies we'll use ===

* Virtualization : KVM
* Platform management : oVirt (http://ovirt.org)
* Storage : iSCSI share
* Disk management : LVM
* OS provisioning : Cobbler

=== Cloud instances ===

Currently, the cloud instance is running on an ovirt-appliance provided by the oVirt team.
We need to decide how we will deploy the cloud instance in the Fedora infrastructure.

* Use a configured ovirt-appliance (just like the test instance)
** All oVirt services in one all-in-one box.

* Deploy and manage all the apps that oVirt uses (Cobbler, collectd, DB, etc.)
** 1 box for the oVirt UI (physical or virtual), load balanced.
** 1 box for Cobbler (physical or virtual).
** 1 box for PXE boot (virtual would be good enough); we could consider combining it with the Cobbler box.
** 1 box for the database (dedicated, to prevent overload issues).
** x box(es) for the oVirt node(s) (dedicated, as described below).


I'm in favor of the second choice.

=== Hardware ===

The nodes will be diskless 1U x3550s with 16 GB of RAM and two quad-core CPUs each. The storage nodes will be 2U boxes of a similar type.


=== Network ===

For the initial rollout there will be two networks.

The first network will be connected via an external IP pool of about 80 IPs (no ETA on this yet).

The second network will be a combined storage and management network. It will be in private IP space (10.something). This is where the storage network will live, as well as overall management. We only have one switch for the initial rollout; VLANs will take care of the rest.

For future rollouts we may add an additional switch, use multipath, etc.


=== Temporary repo ===

Red Hat
* [http://download.tuxfamily.org/lxtnow/redhat/5/i386 rhel_i386]
* rhel_x86_64 (not yet)

Fedora

* fc10 (not yet) :: the oVirt repo can be used in the meantime.
* fc11 (not yet) :: the oVirt repo should do the trick as well (tested).

== oVIRT ==

==== Puppet Files ====

You will find below all the files which need to be Puppet-managed.

# In /puppet/config/cloud
/etc/ovirt-server/database.yml
/etc/ovirt-server/db
/etc/ovirt-server/development.rb
/etc/ovirt-server/production.rb
/etc/ovirt-server/test.rb
/etc/sysconfig/ovirt-mongrel-rails
/etc/sysconfig/ovirt-rails
# In /puppet/config/web
/etc/httpd/conf.d/ovirt-server.conf

[more]
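As a minimal sketch of what managing one of these files could look like, here is a Puppet file resource. The class name, module source path, ownership, and the notified service are all assumptions for illustration, not the actual Fedora infrastructure puppet layout:

```puppet
# Hypothetical sketch: manage /etc/ovirt-server/database.yml from the cloud config dir.
# Class name, source path, owner/group, and service name are illustrative assumptions.
class ovirt_server_config {
    file { "/etc/ovirt-server/database.yml":
        source => "puppet:///config/cloud/database.yml",
        owner  => "ovirt",
        group  => "ovirt",
        mode   => "0640",
        # Assumes a Service["ovirt-mongrel-rails"] resource is defined elsewhere.
        notify => Service["ovirt-mongrel-rails"],
    }
}
```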


==== Authentication (web access) ====

The default authentication for oVirt is handled by krb5 through an LDAP database (it also includes an IPA instance). That's not the way we want to go.
Fedora people should be able to log in with their FAS account (or a trusted CA ?).
Now I'm wondering whether we'll follow that route, which would imply hacking oVirt a bit (e.g. not letting Fedora people see the Red Hat pool through the web interface), or just run a Trac instance where they could request a virtual machine, or again, have them apply to a new cloud-specific group with additional tools to handle the VMs.

* Current ETA:

I have currently set up the web interface to use Apache authentication.
The password file is stored in /srv
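For reference, a minimal Apache config for this could look like the following. The htpasswd filename under /srv is a placeholder (the actual filename isn't documented above), and the /ovirt location is an assumption:

```apache
# Sketch: protect the oVirt web UI with HTTP basic auth.
# /srv/ovirt.htpasswd is a placeholder filename; the real file under /srv isn't named above.
<Location /ovirt>
    AuthType Basic
    AuthName "Fedora Cloud"
    AuthUserFile /srv/ovirt.htpasswd
    Require valid-user
</Location>
```

The password file itself can be created with `htpasswd -c /srv/ovirt.htpasswd <user>` (again, placeholder filename).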

==== User and Permissions management ====

oVirt only fetches registered LDAP users in the pop-up window.
As we use FAS2, we'll make oVirt deal with FAS accounts so we get something like this through the web UI.

Here is a capture of the result after hacking the code a bit :
[[Image:Ovirt-users permissions with fas2.png]]

Now, users in that list can only be approved or made administrators by members of the sysadmin-cloud group (which has been created for the cloud instance).
As said above, web access could be forbidden for regular Fedora users.

[to edit]

==== Hosts management (part of ovirt nodes) ====

oVirt is able to manage different hosts from different places, but hosts can't be shared between hardware pools.
Host information is indexed and stored in its database (table Host).
From all I know, you cannot register any hosts from the web interface.
You are only able to add hosts, from the available registered hosts, to [hardware|virtual|smart] pools or anything else you would like to dream about.

* How to register hosts

Package requirements:

ovirt-node 
ovirt-node-selinux 
ovirt-node-statefull
libvirt-qpid   ## libvirt connector for Qpid and QMF which are used by ovirt-server
collectd-virt  ## libvirt plugin for collectd


Auth requirement:
You need to register your hosts by generating the authentication file via the ovirt-add-host shell script, then load this file to authenticate your host.
This part should be moved out or bound to a generated CA.

Puppet files:

/etc/collectd.conf           ## from where you will add ovirt-server-side and libvirtd infos
/etc/libvirt/qemu.conf
/etc/sasl2/libvirt.conf
/etc/sysconfig/libvirt-qpid  ## from where you'll add ovirt-server side information (host + port num)


iptables issues:
Default port numbers which will need to be open:

7777:tcp        ## for ovirt-listen-awake daemon (tells to ovirt-server that the node is available) 
16509:tcp       ## for libvirtd daemon
5900-6000:tcp   ## vnc connection from client-side
49152-49216:tcp ## for libvirt migration
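The port list above can be turned into firewall rules along these lines. This is a dry-run sketch that only prints the commands; the INPUT chain and ACCEPT target are assumptions to adapt to the local iptables policy (pipe the output to sh to apply):

```shell
# Dry-run sketch: print iptables commands for the default oVirt ports listed above.
# Chain (INPUT) and target (ACCEPT) are assumptions; adjust to the local firewall policy.
ovirt_ports="7777 16509 5900:6000 49152:49216"
rules=$(for p in $ovirt_ports; do
  printf 'iptables -A INPUT -p tcp --dport %s -j ACCEPT\n' "$p"
done)
echo "$rules"
```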


Host-side registration (/var/log/ovirt.log)

Starting wakeup conversation.
Retrieving keytab: 'http://management.priv.ovirt.org/ipa/config/192.168.50.3-libvirt.tab'
Disconnecting.
Sending oVirt Node details to server.
Finished!

==== Storage management ====

For the initial rollout, storage will be done on two servers, each with 4T of usable space. One will be the master and will export its data via iSCSI to the nodes. The other will be secondary storage (replicated to via iSCSI and software RAID). This is not an HA setup but a data-redundancy setup: if one server fails, we won't lose any data.


==== Pools management ====

We will have different pools to dissociate the usage.
So, a pool to handle all Red Hat VMs, one for Fedora people, another for specific usage, and so on.


==== Smart Pools ====

It's just a bookmark-like feature to manage things more easily. Any oVirt user can create one and gets a shortcut menu at the bottom left of the page.

* Current ETA:

I have just started creating smart pools. If you see the same pool more than once, don't worry about it; it's just Lynx which screwed things up during my tries.
I'll need proper web access to go forward; Lynx is very limited for working on the oVirt web UI.

== Cobbler ==

Cobbler is how oVirt handles OS provisioning and profile management. We'll need to anticipate questions from people, such as:
Could we request a specific profile for our VM, or is it just up to us ?

==== Authentication ====

We'll also need to bind it to FAS.

</DRAFT>