It is time for Fedora to start doing automated QA testing.
Many minds met during FUDCon F11 Boston (January 9-11, 2009) to brainstorm on automated testing.
We identified a number of "triggers" to react to, a number of tests to perform in response, and a set of places to notify of the results. Here is a quick dump of these three things.
Triggers, Tests, and Notifiers
Triggers
- cvs checkin
- koji build
- bodhi request
- repo compose
- install tree compose
- manual (periodic)
- iso compose
- new test creation
- post test
- comps change
Tests
- repository sanity (sublist here)
- package installation
- profile testing
- tree sanity
- rpmdiff
- build log sanity
- source sanity
- build root sanity
- fails to build from source
- make check
- comps grammar
Notifiers
- <package>-owner@fedoraproject.org
- project webpage
- fedora-test-list@redhat.com or fedora-devel-list@redhat.com
Implementation
Build acceptance testing
This would cover the triggers:
- cvs checkin
- koji build
And the tests:
- package installation
- rpmdiff
When a build is done in Koji, the last action is to "tag" the build for a given collection. This is in essence the "acceptance" step: Koji accepts that the build was successful and tags it for inclusion in a collection. Koji doesn't have to do that tag step automatically. In fact, if you build with --skip-tag, Koji will not apply the tag after the build completes. The build will still be imported into Koji, and can be accessed for a period of time before it would be garbage collected. This gives us an opportunity to test before tagging.
In an ideal world, for certain targets Koji would start the build and proceed through every step except applying the tag. It would announce via a message bus that a build had completed. An automated testing system would notice this build and run a series of tests on it. Should the tests succeed, a utility would, via another message on the bus, apply the tag within Koji and the build would be "accepted". Should the tests fail, various parties would be notified, and via some UI somebody could either override the failed tests and force the build to be accepted, or just ignore it and let the build be garbage collected as newer builds that pass the tests are prepared.
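The gating flow above can be sketched as a small decision step. Everything here is hypothetical illustration — the function names, the dict standing in for a build, and the stub tests are not Koji's real API; they stand in for the message-bus listener, the test runner, and the tagging step:

```python
# Hypothetical sketch of the build-gating flow: a completed but
# untagged build is tested, then either tagged or held for review.

def run_tests(build, tests):
    """Run each acceptance test against the build; return the names that failed."""
    return [name for name, test in tests if not test(build)]

def gate_build(build, tests, override=False):
    """Decide whether a completed (still untagged) build gets its tag."""
    failures = run_tests(build, tests)
    if not failures or override:
        return ("tagged", failures)   # apply the collection tag in Koji
    return ("held", failures)         # notify owners; build may be garbage collected

# Stand-ins for two of the tests named above; a real build object
# would come off the message bus, not from a hand-built dict.
EXAMPLE_TESTS = [
    ("package-installation", lambda build: build["installs"]),
    ("rpmdiff", lambda build: build["rpmdiff_clean"]),
]

good = {"installs": True, "rpmdiff_clean": True}
bad = {"installs": True, "rpmdiff_clean": False}
print(gate_build(good, EXAMPLE_TESTS))                # ('tagged', [])
print(gate_build(bad, EXAMPLE_TESTS))                 # ('held', ['rpmdiff'])
print(gate_build(bad, EXAMPLE_TESTS, override=True))  # maintainer forces the tag
```

Note the override path: the system reports the failures either way, so a forced tag still leaves a record of which tests the maintainer chose to ignore.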
The tests run on the build should be well proven to find actual problems without generating a lot of false positives. As with anything in Fedora, the testing system should be informative, but allow the maintainer (or other authorized party) to override the tests and force the build. Our maintainers should know best, and we should give them the tools and information to help them make better decisions.
Timeline
Also discussed were timeline targets: the next 3 weeks, the next 3 months, and the next 3 quarters. We feel we can accomplish a number of the triggers and tests within the next 3 weeks (from January 11, 2009), focusing mostly on package and repo sanity. In the next 3 months we can likely add functional testing of the installer via Lab in a Box. Over the next 3 quarters we hope to expand functional testing to individual packages.