Revision as of 16:50, 13 May 2015

Note -- a meeting was held on March 25, 2014 describing the current state of the cloud: http://meetbot.fedoraproject.org/fedora-classroom/2014-03-25/infrastructure-private-cloud-class.2014-03-25-18.00.html The logs of that meeting are more up to date than this wiki page.

Background

Fedora Infrastructure runs 2 private cloudlets for various infrastructure projects. One is the primary or 'production' cloud; the other is used for newer versions and for testing different setups or software. Currently the primary cloud runs OpenStack Folsom, and the other is testing OpenStack Icehouse. We are in the process of migrating to the new Icehouse-based cloud now.

History

In 2012 we set up 2 sets of machines as 2 cloudlets. We tested various cloud software on these cloudlets at various times, and eventually established a primary cloudlet running OpenStack Folsom. In 2014 and 2015 we set up a new cloud using Ansible playbooks for a repeatable and maintainable deployment.

Two Cloudlets

We keep things in 2 cloudlets so we can serve existing cloud needs while still being able to test new software and technology. From time to time we may migrate workloads from one to the other as a newer version or kind of setup is determined to meet our production needs more closely.

Current setup

Current setup (as of 2014-03-25) is described in #fedora-classroom: http://meetbot.fedoraproject.org/fedora-classroom/2014-03-25/infrastructure-private-cloud-class.2014-03-25-18.00.log.html

Old cloudlet (Folsom, being migrated away from):

  • fed-cloud01, fed-cloud03, fed-cloud04, fed-cloud05, fed-cloud06, fed-cloud07, fed-cloud08 are all compute nodes in this cloud.
  • fed-cloud02 is the main controller node.

New cloudlet (Icehouse, being migrated to):

  • fed-cloud09 is the main controller node.
  • fed-cloud10 through fed-cloud15 are compute nodes.

Setup / deployment

This hardware sits on the 'edge' of the network and is not connected to the rest of Fedora Infrastructure except via external networks. This allows us to use external IPs and ensures the cloud instances don't have access to anything in the regular Fedora Infrastructure. Storage is on the local servers.

We have 15 physical servers total. Currently 8 of them are in the 'production/old/Folsom' cloudlet and 7 are in the new Icehouse cloudlet. As we migrate, we will move more nodes to the Icehouse cloudlet.

Policies

Users or groups that need a rare one-off instance can simply request one via an infrastructure ticket.

Users or groups that often need instances may be granted accounts to spin up and down their own images.

Instances may be rebooted at any time. Save your data off often.

Persistent storage may be available as separate volumes. Data retention policies and quotas may be imposed on this data.

Instances are to assist in furthering work related to the Fedora Project. Please don't use them for unrelated activities.

We reserve the right to shutdown, delete or revoke access to any instances at any time for any reason.

Images

We will provide Fedora, CentOS, and RHEL images.

If you need to add images, please give them the same name as their filename. I.e., "Fedora 22 Beta TC 2" is fine; please don't use 'test image', as then we have no idea what it might be.
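As an illustration of that naming convention, a descriptive name can be derived mechanically from the image file's name. The helper below is purely hypothetical (it is not part of the infrastructure tooling) and assumes typical image filenames with an extension and an optional architecture suffix:

```python
import os

def image_name_from_filename(path):
    """Derive a human-readable image name from an image file path.

    Hypothetical example: "Fedora-22-Beta-TC-2.x86_64.qcow2"
    becomes "Fedora 22 Beta TC 2".
    """
    base = os.path.basename(path)
    # Strip a known image-format extension, if present.
    for ext in (".qcow2", ".raw", ".img"):
        if base.endswith(ext):
            base = base[: -len(ext)]
            break
    # Strip an architecture suffix, if present (assumed convention).
    for arch in (".x86_64", ".i386"):
        if base.endswith(arch):
            base = base[: -len(arch)]
            break
    # Turn separators into spaces for a readable name.
    return base.replace("-", " ").replace("_", " ")
```

The point is simply that the image name in the cloud should match the file it came from, so anyone can tell what the image contains.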

Major users

  • The Copr buildsystem is housed entirely in the Fedora Infrastructure private cloud.
  • Jenkins: Fedora Infrastructure provides a Jenkins instance to run tests for some open source projects.
  • Many Infrastructure development instances are housed in the Fedora Infrastructure private cloud.
  • The Twisted project runs some Buildbot tests.

Hardware access

SSH access to the bare nodes will be limited to sysadmin-cloud and possibly fi-apprentice (with no sudo).

Maintenance windows

With the move to the new Icehouse cloud, we reserve the right to update and reboot the cloud as needed. We will schedule these outages as we do any other outage, and after the outage is over we will spin back up any persistent cloud instances that are in our Ansible inventory. Owners of any other instances are responsible for spinning up new versions of them after the outage and making sure all updates are applied.

Contact / more info

Please contact the #fedora-admin IRC channel or the Fedora infrastructure mailing list with any issues or questions about our private cloud.