Note: a meeting describing the current state of the cloud was held on March 25, 2014; the logs (http://meetbot.fedoraproject.org/fedora-classroom/2014-03-25/infrastructure-private-cloud-class.2014-03-25-18.00.html) are more up to date than this wiki page.

Background

Fedora Infrastructure runs two private cloudlets for various infrastructure projects. One is the primary or 'production' cloud; the other is used to test newer versions and different setups or software. Currently the primary cloud runs OpenStack Folsom, while the other is testing OpenStack Icehouse. We are in the process of migrating to the new Icehouse-based cloud now.

History

In 2012 we set up two sets of machines for two cloudlets and tested various cloud software on them at various times. Eventually a primary cloudlet was established running OpenStack Folsom. In 2014 and 2015 we set up a new cloud using Ansible playbooks so that the setup is repeatable and maintainable.

Two Cloudlets

Things are split into two cloudlets so we can serve existing cloud needs while still having the ability to test new software and technology. From time to time we may migrate workloads from one to the other as a newer version or kind of setup is determined to meet our production needs more closely.

Current setup

The current setup (as of 2014-03-25) is described in the #fedora-classroom session log: http://meetbot.fedoraproject.org/fedora-classroom/2014-03-25/infrastructure-private-cloud-class.2014-03-25-18.00.log.html

Old cloudlet (Folsom, being migrated away from):

  • fed-cloud01, fed-cloud03, fed-cloud04, fed-cloud05, fed-cloud06, fed-cloud07, fed-cloud08 are all compute nodes in this cloud.
  • fed-cloud02 is the main controller node.

New cloudlet (Icehouse, being migrated to):

  • fed-cloud09 is the main controller node.
  • fed-cloud10 through fed-cloud15 are compute nodes.

Setup / deployment

This hardware sits on the 'edge' of the network and is not connected to the rest of Fedora Infrastructure except via external networks. This allows us to use external IPs and ensures the cloud instances don't have access to anything in the regular Fedora Infrastructure. Storage is on the local servers.

We have 15 physical servers in total. Currently 8 of them are in the 'production/old/Folsom' cloudlet and 7 are in the new Icehouse cloudlet. As we migrate, we will move more nodes to the Icehouse cloudlet.

Policies

Users or groups that need a rare one-off instance can simply request one via an infrastructure ticket.

Users or groups that often need instances may be granted accounts to spin their own instances up and down, as in the sketch below.
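For example, an account holder could boot an instance with python-novaclient. This is a minimal sketch, assuming an Icehouse-era novaclient; the credentials, endpoint, flavor, image, and key names are all illustrative, not values documented on this page:

  from novaclient import client

  # Credentials and endpoint are hypothetical; real values would come
  # from the keystonerc you are given with your account.
  nova = client.Client("2", "myuser", "mypassword", "myproject",
                       "http://fed-cloud09.example.org:5000/v2.0/")

  # Pick a flavor and image by name (names here are illustrative).
  flavor = nova.flavors.find(name="m1.small")
  image = nova.images.find(name="Fedora 22 Beta TC 2")

  # Boot the instance with an SSH keypair already registered in the cloud.
  server = nova.servers.create(name="my-test-instance",
                               image=image, flavor=flavor,
                               key_name="mykey")
  print(server.id, server.status)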

Instances may be rebooted at any time. Save your data off often.

Persistent storage may be available as separate volumes. Data retention policies and quotas may be imposed on this data.
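As an illustration, a separate volume could be created and attached with the same client; a minimal sketch, assuming the nova client object and server from the sketch above, with the size, name, and device path all illustrative:

  # Create a 10 GB volume and attach it to a running instance.
  volume = nova.volumes.create(10, display_name="my-data")
  nova.volumes.create_server_volume(server.id, volume.id, "/dev/vdb")
  # Inside the guest, the volume then appears as /dev/vdb and can be
  # formatted and mounted like any other disk.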

Instances are to assist in furthering work related to the Fedora Project. Please don't use them for unrelated activities.

We reserve the right to shutdown, delete or revoke access to any instances at any time for any reason.

Images

We will provide Fedora, CentOS, and RHEL images.

If you need to add images, please name them the same as their filename, e.g. "Fedora 22 Beta TC 2" is fine; please don't use 'test image', as we would have no idea what it might be.
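For instance, uploading an image with python-glanceclient might look like the following sketch; the endpoint, token, and file path are hypothetical, and the image name matches its filename as requested above:

  from glanceclient import Client

  # Endpoint and token are illustrative; a real token comes from keystone.
  glance = Client("1", "http://fed-cloud09.example.org:9292", token="TOKEN")

  # Name the image after its file, per the naming policy above.
  with open("Fedora-22-Beta-TC-2.qcow2", "rb") as f:
      image = glance.images.create(name="Fedora 22 Beta TC 2",
                                   disk_format="qcow2",
                                   container_format="bare",
                                   data=f)
  print(image.id, image.status)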

Major users

  • The Copr build system is housed entirely in the Fedora Infrastructure private cloud.
  • Jenkins: Fedora Infrastructure provides a Jenkins instance to run tests for some open source projects.
  • Many Infrastructure development instances are housed in the Fedora Infrastructure private cloud.
  • The Twisted project runs some Buildbot tests.

Hardware access

SSH access to the bare nodes is limited to sysadmin-cloud, and possibly fi-apprentice (with no sudo).

Maintenance windows

With the move to the new Icehouse cloud, we reserve the right to update and reboot the cloud as needed. We will schedule these outages as we do any other outage, and after the outage is over we will spin back up any persistent cloud instances that are in our Ansible inventory. Owners of any other instances must spin up new versions of them after the outage and make sure all updates are applied.
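After such an outage, an instance owner could check on and restart their own instances with python-novaclient; a minimal sketch, again assuming the nova client object shown earlier (this is illustrative, not the playbook Infrastructure itself uses):

  # Find instances that did not come back after the outage and start them.
  for server in nova.servers.list():
      if server.status == "SHUTOFF":
          print("starting", server.name)
          server.start()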

Contact / more info

Please contact the #fedora-admin IRC channel or the Fedora infrastructure mailing list with any issues or questions about our private cloud.