User:Tflink/Sandbox:AutoQA staging environment

Summary
This is a proposal for how to use the AutoQA staging environment for testing AutoQA. There is a lot of work involved with this proposal, so the idea is to get it done in stages.

Benefit to AutoQA

 * More testing
 * Easier reproduction of bugs in current tests
   * One difficulty we have is that AutoQA's tight coupling with Fedora infrastructure makes it difficult to reproduce previous conditions in order to triage test bugs. By having a method of creating a stand-alone environment, condition recreation would be significantly easier.
 * Deprecation of current mascot
 * By finding bugs in AutoQA earlier, we can reduce frustration for packagers and help them catch issues sooner

Deployment of Staging environment
When testing code, one question that frequently comes up is whether to start with a pristine environment or continually re-use an existing environment.

For stage 1, I don't think this matters a whole lot because we're still going to be doing most of the testing manually, and little automation will be used at that time.

Starting with stage 2, I think that using pristine environments is more important because we would be starting from a known state - clean. Once we get into stage 3 and use automation to check the results of pre-programmed use cases, it would be even more important to start from scratch so that we can get predictable job numbers, dependent results, etc. In my mind, at least, it would make the implementation of such a setup easier to handle.

Stage 1
At specified intervals (either on a change to master/stable or on a timed schedule), create a new AutoQA environment using the staging hardware. This could involve either destroying and re-creating the environment or simply updating the existing one. This would be handled by some form of continuous integration together with autoqa-in-a-box.
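The trigger decision described above (rebuild on a branch change or after a timed interval) could be sketched as follows. This is an illustrative assumption, not part of the proposal - in practice the logic would live inside whatever CI tool we pick, and the function name and parameters are hypothetical.

```python
import time

# Hypothetical sketch: decide whether the CI job should rebuild the
# staging environment. Rebuild when the watched branch (master/stable)
# has a new commit, or when the timed interval has elapsed since the
# last rebuild.
def should_rebuild(last_sha, head_sha, last_build_time, interval_secs, now=None):
    if now is None:
        now = time.time()
    if head_sha != last_sha:
        # A change landed on the watched branch.
        return True
    # Otherwise, fall back to the timed interval.
    return (now - last_build_time) >= interval_secs
```

Whether the rebuild destroys and re-creates the environment or just updates it is a separate decision; this sketch only covers when to kick off the job.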

During stage 1, the actual testing would be somewhat manual. We would have to watch the results by hand to make sure that things are going smoothly.

Stage 1 units of work

 * Continuous integration setup
   * Probably hudson or buildbot
   * Try to find an existing installation or use something shared

Stage 1 dependencies

 * AutoQA in a box

Stage 2
Stage 2 involves the addition of some mock infrastructure. The staging environment would still rely on the same production koji instance for build information and builds but results would be posted out of band in a mock setup (either mock bodhi or a staging instance of resultsdb, depending on timing).

The mock results would be posted and stored in such a way that they would be easy to extract with an external checker. During stage 2, the utility would be limited to making it easier for us to check results so that we don't have to comb through email reports or web pages. No automated checking of results (except for crashes or obvious failures) would be done in stage 2.
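The out-of-band store described above could look something like this. The record shape, function names, and test names are assumptions for illustration; the real storage would be mock bodhi comments or a staging resultsdb instance.

```python
# Hypothetical stage 2 sketch: each mock result is appended as a plain
# record so an external checker can pull results for a given build
# without combing through email or web pages.
def post_result(store, test_name, item, outcome, details=""):
    # "Post" a mock result out of band by appending it to the store.
    store.append({"test": test_name, "item": item,
                  "outcome": outcome, "details": details})

def results_for(store, item):
    # External-checker entry point: every result posted for one build.
    return [r for r in store if r["item"] == item]

store = []
post_result(store, "depcheck", "foo-1.0-1.fc15", "PASSED")
post_result(store, "upgradepath", "foo-1.0-1.fc15", "FAILED", "broken path")
```

The point is only that results are structured and queryable; in stage 2 a human still interprets them, and only crashes or obvious failures are flagged automatically.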

Stage 2 units of work

 * Mock Fedora infrastructure
   * This would only require mock bodhi comments; we would still use production koji and read from production bodhi
   * Possibly setup of a staging resultsdb instance

Stage 3
Stage 3 would start integrating other elements of the mock infrastructure and start using pre-defined test cases to exercise AutoQA. These test cases would set up a pre-determined environment and would be able to verify that the posted results match what is expected given the known environment.
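Because the environment is pre-determined, we know in advance what every test should report, so a checker can diff posted results against expectations. A minimal sketch, assuming results are keyed by test name (the function name and result shape are hypothetical):

```python
# Hypothetical stage 3 sketch: compare the results AutoQA actually
# posted against the outcomes expected for the known environment.
def verify_results(expected, posted):
    """Return a list of mismatches; an empty list means AutoQA behaved."""
    problems = []
    for test, outcome in expected.items():
        actual = posted.get(test)
        if actual is None:
            problems.append("%s: no result posted" % test)
        elif actual != outcome:
            problems.append("%s: expected %s, got %s" % (test, outcome, actual))
    return problems

# A pre-programmed use case where upgradepath is known to be broken:
expected = {"depcheck": "PASSED", "upgradepath": "FAILED"}
posted = {"depcheck": "PASSED", "upgradepath": "PASSED"}
```

Here the checker would report that upgradepath passed when the known environment should have made it fail - exactly the kind of AutoQA bug this stage is meant to catch.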

Of the three stages, this one is the most vague. Stages 1 and 2 are not going to be trivial and we're still in the process of figuring out exactly what the tests for AutoQA are going to look like. Once we have that figured out, the exact details for stage 3 can be fleshed out.

Stage 3 units of work

 * Complete mock Fedora infrastructure
   * Read from koji and bodhi
   * Set up package environments for testing
 * Use cases for AutoQA
 * Tests for AutoQA