
Note -- a meeting describing the current state of the cloud was held on March 25, 2014: http://meetbot.fedoraproject.org/fedora-classroom/2014-03-25/infrastructure-private-cloud-class.2014-03-25-18.00.html. The logs of that meeting are more up to date than this wiki page.

Background

Fedora Infrastructure is running 2 private cloudlets for various infrastructure projects. One of these is the primary or 'production' cloud; the other is used for newer versions and for testing different setups or software. Currently the primary cloud is running OpenStack Folsom, and the other is testing OpenStack Havana.

History

In 2012 we set up 2 sets of machines for 2 cloudlets. We tested various cloud software on these cloudlets at various times.

Two Cloudlets

We have things set up in 2 cloudlets so that we can serve existing cloud needs while still being able to test new software and technology. From time to time we may migrate uses from one cloudlet to the other as a newer version or kind of setup is determined to better meet our production needs.

Current setup

Current setup (as of 2014-03-25) as described in #fedora-classroom: http://meetbot.fedoraproject.org/fedora-classroom/2014-03-25/infrastructure-private-cloud-class.2014-03-25-18.00.log.html

Use cases

Doesn't need persistent storage

  • Fedora QA may use instances with its AutoQA setup. Instances would be created, tests run, and the instances destroyed. It's unknown how many instances we would need here.
  • Coprs uses our cloud for a frontend, a backend, and builders. Builds are submitted to the frontend; the backend processes them, creates builder instances to do the builds, and then terminates those instances when the builds complete.
  • Mass rebuilds of Fedora packages. This could be done to test a new global rpm/package change, or to discover FTBFS (fails to build from source) packages. This would use as many builders as we could easily spin up to reduce the time needed to build all 10,000+ Fedora packages. It could use a chainbuilding setup as scaffolding. Additionally, extra builder instances could potentially be used by the official build system during mass rebuilds to reduce rebuild time.
  • Docs folks need to generate i18n versions of the docs. This would require an instance with the tools and a script running; the data is then synced off and the instance can be destroyed.

Needs persistent storage, but can possibly use a volume mounted on /mnt

  • Test instances may be used for testing new tech or applications as a proof of concept before pursuing an RFR (Request For Resources).

Needs persistent storage and snapshots

  • Infrastructure Development hosts have been moved to this cloud. These instances could possibly be 'on demand' when development needs to take place. Currently we have about 8 development instances, many of them already in the cloud; the rest should be migrated soon.
  • We may want to move some of our one-off instances that are outside phx2 into the cloud for easier management -- things like keyservers, unbound instances, list servers, or hosted resources. This is not yet planned for.

Further down the road:

  • Instances for QA/packagers to test new packages or track down bugs.
  • Instances for demos or events to show off Fedora.

Setup / deployment

This hardware is set up on the 'edge' of the network and not connected to the rest of Fedora Infrastructure except via external networks. This allows us to use external IPs and to make sure the cloud instances don't have access to anything in the regular Fedora Infrastructure. Storage will be on the local servers.

We have 8 physical servers for this deployment. Currently 6 of them are in the 'production' cloudlet, and 2 are available for testing new deployments.

Policies

We need to set up clear policies on usage of and access to the private cloud. In general we plan to open things to a small group of trusted contributors, take their feedback and usage, and expand access out to larger groups as capacity and desire allow.

Users or groups that need rare one-off instances can simply request one via a ticket. Users or groups that often need instances will be granted accounts to spin up and tear down their own instances.

Instances may be rebooted at any time. Save your data off often.

Persistent storage may be available as separate volumes. Data retention policies and quotas may be imposed on this data.
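
For the use cases above that can work with a volume mounted on /mnt or /srv, a volume would be created and attached through the cloud's EC2-compatible API. The following is only a minimal sketch using the boto library; the endpoint, credentials, availability zone, instance ID, and device name are placeholders, not our actual values.

    # Sketch: create a 10 GB volume and attach it to a running instance.
    import time
    import boto

    # Connection details are placeholders for the cloud's EC2-compatible endpoint.
    conn = boto.connect_euca(host='ec2.cloud.example.org',
                             aws_access_key_id='ACCESS_KEY',
                             aws_secret_access_key='SECRET_KEY')

    vol = conn.create_volume(10, 'nova')          # size in GB, availability zone
    while vol.status != 'available':              # wait until the volume is ready
        time.sleep(5)
        vol.update()
    conn.attach_volume(vol.id, 'i-00000042', '/dev/vdb')   # placeholder instance id and device
    # The instance can then format the volume and mount it on /srv or /mnt.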

The default network policy will allow only TCP ports 22, 80, and 443.
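
As an illustration only (not our actual configuration), this default policy maps naturally onto a security group on the EC2-compatible API. A minimal boto sketch, reusing the hypothetical connection object from the volume example above:

    # Sketch: a security group matching the default policy (ssh, http, https only).
    sg = conn.create_security_group('default-web', 'allow only ssh, http and https')
    for port in (22, 80, 443):
        sg.authorize(ip_protocol='tcp', from_port=port,
                     to_port=port, cidr_ip='0.0.0.0/0')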

Instances are to assist in furthering work related to the Fedora Project. Please don't use them for unrelated activities.

We reserve the right to shutdown, delete or revoke access to any instances at any time for any reason.

Images

We should customize available images for the above use cases:

All images

Currently for Fedora we are using the officially produced Fedora cloud images. For RHEL we are using a similar minimal instance image.

Infrastructure Dev Instances

Based on the RHEL 6 image.

Should contain:

mod_wsgi, httpd, git-core, puppet, persistent volume mounted on /srv

QA images

TBD

Builder Images

For mockchain/kopers use. Builder instances should be limited to a lifetime of 24 hours.
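
The 24-hour limit is not something the cloud enforces by itself, so it would need a small cleanup job. A hedged sketch of what such a reaper could look like with boto is below; the connection object, the 'copr-builder' key name used to recognize builder instances, and the launch_time parsing are assumptions, not our current setup.

    # Sketch: terminate builder instances that have been running for more than 24 hours.
    from datetime import datetime, timedelta

    cutoff = datetime.utcnow() - timedelta(hours=24)

    for reservation in conn.get_all_instances():      # conn as in the earlier sketches
        for inst in reservation.instances:
            if inst.state != 'running':
                continue
            if inst.key_name != 'copr-builder':        # hypothetical marker for builder instances
                continue
            # launch_time is an ISO 8601 string, e.g. '2014-03-25T18:00:00.000Z'
            started = datetime.strptime(inst.launch_time[:19], '%Y-%m-%dT%H:%M:%S')
            if started < cutoff:
                conn.terminate_instances(instance_ids=[inst.id])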

Using ansible with the cloud

TBD: fill in with info on how to make transient or persistent instances via ansible on lockbox01.
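
Until the ansible material above is filled in, the underlying lifecycle of a transient instance can be sketched against the EC2-compatible API with boto. This only illustrates the launch/wait/terminate steps (the real process goes through the ansible playbooks on lockbox01); the endpoint, image ID, keypair, flavor, and security group names are placeholders.

    # Sketch: launch a transient instance, wait for it, do work, then terminate it.
    import time
    import boto

    conn = boto.connect_euca(host='ec2.cloud.example.org',     # placeholder endpoint
                             aws_access_key_id='ACCESS_KEY',
                             aws_secret_access_key='SECRET_KEY')

    reservation = conn.run_instances('emi-00000000',           # placeholder image id
                                     key_name='fedora-admin',  # placeholder keypair
                                     instance_type='m1.small',
                                     security_groups=['default-web'])
    inst = reservation.instances[0]

    while inst.state != 'running':    # poll until the instance is up
        time.sleep(10)
        inst.update()

    print(inst.ip_address)            # hand this address to whatever does the actual work

    # ... run tests / builds / doc generation here ...

    inst.terminate()                  # transient instances are destroyed when the work is done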

Moving to "production"

This section is a checklist of things we need to do before we can consider either of the cloudlets "production". Once we move them to production mode we will move to scheduling outages, try to keep instances running smoothly, and just perform upgrades and maintenance on the cloudlets. Before we do this we want to make sure that things are stable and that processes are ready for users.

  • An SOP needs to be written for creating images (what's in them, update policy, SSH key policy, etc.).
    • We can reuse the Fedora cloud images for Fedora.
    • We still need to determine a 'standard' RHEL 6 image.
    • SSH keys should be added for root for sysadmin-main and sysadmin-cloud?
  • Decide who gets a login to manage instances, and who can just request instances be made for them. (done)
    • Normal use cases have instances created by ansible. If further access is needed, it's granted case by case.
    • Down the road we may want to integrate FAS somehow, but not now.
  • SOP on making an instance for a requestor (via infra ticket?). (done: yes, via ticket)
    • Write an instance-setup script/tool that fetches the user's SSH key from FAS based on the given FAS login, so that the user can receive an email once the instance is created and log in.
    • Do all instance creation in ansible?
  • Decide on time limits or other resource limits per account/tenant. Set up initial accounts/tenants.
  • The OpenStack cloudlet needs ansible playbooks written to install/configure it.
  • OpenStack needs Folsom testing performed. (done: Folsom is installed now)
  • OpenStack needs VLAN testing performed. (done: VLANs are in use)
  • OpenStack needs non-glusterfs testing done. (done: the current install has a base filesystem)
  • Do we need to decide between euca and openstack? When? (decided: for now we are going to do both)
  • We need monitoring added. Nagios? Controllers down, nodes down, capacity issues, etc.
    • Run a persistent nagios image in either cloudlet and monitor the other?
  • We need reporting added. Note when instances are made, etc. Either logging to log02, or some separate report for cloud-sysadmins. Possibly some export from the software about cpu/mem/disk, etc.
    • Could possibly be done at ansible creation time or via a gather script.
    • Email from the ansible script is done now; we still need reporting of non-ansible instances.
  • Need to determine who has access to the physical cloud machines. Repurpose sysadmin-cloud, set up FAS and sudo? (done)
    • If we add FAS (and we should), why not configure it to only create shell accounts for people who are admins or sponsors in sysadmin-cloud? This brings us back to the second point, where approved people from sysadmin-cloud could have access to request that instances be made.
    • Shell access to the physical cloudlet machines doesn't grant you any access to the cloud software directly.
    • sysadmin-cloud will have access to the compute and head nodes, but no special access to the cloud instances.
  • Set up a group that can run ansible against the physical cloud machines for updates, etc. (see the access question above). (done)
  • Consider a recurring maintenance window for reboots/updates, i.e. tell everyone that every month we have a window to do so, and to save work before then. (done)
    • We will just schedule these as needed.
  • Figure out how to handle DNS. Should we set up some kind of dyndns? Should we just leave it with generic DNS? Should we ask for control of reverse DNS?
    • We have asked for control of reverse DNS.
  • How do we back these systems up, and what should we actually be backing up?
    • At this point I'd say we don't back up, and note to users to always back up their data often.
  • Figure out how to make some systems be (or seem to be) persistent. (done: this can be done in the ansible repo on lockbox)
  • Store more metadata per instance created so we can track who/when/where/what (requires tags in Eucalyptus 3.3, which does not exist as of now).

Post-Production/2.0

  • A backup 'subscription' service, so users of the cloud can request that backups be performed on their instances and specify how they should happen.
  • OpenID/FAS integration might be nice.
  • Increase capacity and gather more use cases.