It is time for Fedora to start doing automated QA testing.

Many minds met during FUDCon F11 Boston 2009 (January 9-11, 2009) to brainstorm about automated testing (session recording: http://alt.fedoraproject.org/pub/alt/videos/2009/FUDConF11/qa.ogg).

We identified a number of "triggers" to react to in order to test things, a number of tests to perform, and a set of places to notify of the results. Here is a quick dump of these three things.

Triggers, Tests, and Notifications

Triggers

  • cvs checkin
  • koji build
  • bodhi request
  • repo compose (see autoqa's watch-repos.py at http://git.fedorahosted.org/git/?p=autoqa.git;a=blob;f=post-repo-update/watch-repos.py;hb=HEAD and the polling sketch after this list)
  • install tree compose
  • manual (periodic)
  • iso compose
  • new test creation
  • post test
  • comps change
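
A trigger like "repo compose" can be as simple as polling the repository metadata. The sketch below is plain illustrative Python, not the actual watch-repos.py code; the repository URL, the poll interval, and the on_new_compose() callback are made-up examples of where the real tests would be kicked off.

 # Minimal sketch of a "repo compose" trigger: poll repomd.xml and fire a
 # callback when its contents change. Illustration only, not autoqa code;
 # the URL and interval are made-up examples.
 import hashlib
 import time
 import urllib.request

 REPOMD_URL = "http://example.org/fedora/development/x86_64/os/repodata/repomd.xml"
 POLL_SECONDS = 300

 def repomd_checksum(url):
     # The checksum of repomd.xml changes whenever the repo is recomposed.
     with urllib.request.urlopen(url) as response:
         return hashlib.sha256(response.read()).hexdigest()

 def on_new_compose(url):
     # Placeholder: this is where the repository sanity tests would start.
     print("repo compose changed:", url)

 def watch():
     last = None
     while True:
         current = repomd_checksum(REPOMD_URL)
         if last is not None and current != last:
             on_new_compose(REPOMD_URL)
         last = current
         time.sleep(POLL_SECONDS)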

Tests

  • repository sanity (sublist here; see the post-repo-update tests in autoqa: http://git.fedorahosted.org/git/?p=autoqa.git;a=tree;f=post-repo-update;hb=HEAD)
  • package installation
  • profile testing
  • tree sanity
  • rpmdiff
  • build log sanity (a rough sketch follows this list)
  • source sanity
  • build root sanity
  • fails to build from source
  • make check
  • comps grammar
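
As an illustration of what a "build log sanity" test might look like (this is not an existing autoqa test; the patterns and the log path are made-up examples), a minimal check could scan a build.log fetched from Koji for known rpmbuild warning and failure messages:

 # Minimal sketch of a "build log sanity" check: scan a Koji build.log for
 # suspicious patterns. Not an existing autoqa test; the patterns below are
 # just examples of messages worth flagging.
 import re
 import sys

 SUSPICIOUS_PATTERNS = [
     r"error: Installed \(but unpackaged\) file\(s\) found",
     r"warning: File listed twice",
     r"No such file or directory",
 ]

 def check_build_log(path):
     problems = []
     with open(path, errors="replace") as log:
         for lineno, line in enumerate(log, 1):
             for pattern in SUSPICIOUS_PATTERNS:
                 if re.search(pattern, line):
                     problems.append((lineno, line.rstrip()))
     return problems

 if __name__ == "__main__":
     hits = check_build_log(sys.argv[1])   # e.g. a build.log fetched from Koji
     for lineno, line in hits:
         print("line %d: %s" % (lineno, line))
     sys.exit(1 if hits else 0)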


Notifiers

  • <package>-owner@fedoraproject.org (a mailing sketch follows this list)
  • project webpage
  • fedora-test-list@redhat.com or fedora-devel-list@redhat.com
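
How results get delivered is still open. As a rough sketch only (the SMTP host, sender address, and message format below are placeholders, not an agreed-on interface), a notifier could mail a short summary to the package owner alias:

 # Minimal sketch of a mail notifier. The SMTP host, sender address, and
 # message format are placeholders; nothing here is an agreed-on interface.
 import smtplib
 from email.mime.text import MIMEText

 def notify_owner(package, test_name, result, log_url):
     body = "Automated test '%s' for %s finished: %s\nLog: %s" % (
         test_name, package, result, log_url)
     msg = MIMEText(body)
     msg["Subject"] = "[autoqa] %s: %s %s" % (package, test_name, result)
     msg["From"] = "autoqa@fedoraproject.org"   # placeholder sender
     msg["To"] = "%s-owner@fedoraproject.org" % package
     server = smtplib.SMTP("localhost")         # placeholder SMTP host
     server.sendmail(msg["From"], [msg["To"]], msg.as_string())
     server.quit()

 # Example (hypothetical values):
 # notify_owner("bash", "rpmdiff", "FAILED", "http://example.org/logs/bash-rpmdiff.txt")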

Implementation

Build acceptance testing

This would cover the triggers:

  • cvs checkin
  • koji build

And the tests:

  • package installation
  • rpmdiff

When a build is done in Koji, the last action is to "tag" the build for a given collection. This is, in essence, the "acceptance" step: Koji accepts that the build was successful and tags it for inclusion in a collection. Koji doesn't have to do that tag step automatically. In fact, if you build with --skip-tag, Koji will not apply the tag after the build completes. The build will still be imported into Koji and can be accessed for a period of time before it is garbage collected. This gives us an opportunity to test before tagging.
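
As a rough sketch of that flow using the koji command line from Python (the build target, tag, SCM URL, and NVR below are made-up examples, and an already configured, authenticated koji client is assumed), a build could be submitted untagged and tagged only after testing:

 # Rough sketch of the untagged-build flow driving the koji CLI from Python.
 # The target, tag, SCM URL, and NVR are made-up examples; a configured and
 # authenticated koji client is assumed.
 import subprocess

 def build_without_tag(target, scm_url):
     # --skip-tag tells Koji not to tag the build when it finishes.
     subprocess.check_call(["koji", "build", "--skip-tag", target, scm_url])

 def accept_build(tag, nvr):
     # Apply the tag only after the automated tests have passed.
     subprocess.check_call(["koji", "tag-build", tag, nvr])

 # Example (hypothetical values):
 # build_without_tag("dist-f11", "cvs://cvs.fedoraproject.org/cvs/pkgs?rpms/foo/devel#foo-1_0-1_fc11")
 # accept_build("dist-f11", "foo-1.0-1.fc11")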

In an ideal world, for certain targets Koji would run the build all the way through except for tagging it. It would announce via a message bus that a build was completed. An automated testing system would notice this build and run a series of tests on it. Should the tests succeed, then via another message on the bus a utility would apply the tag to the build within Koji and the build would be "accepted". Should the tests fail, various parties would be notified, and via some UI somebody could either override the failed tests and force the build to be accepted, or just ignore it and let the build be garbage collected while newer builds are prepared that do pass the tests.
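
None of this message-bus machinery exists yet, so the following is only a sketch of the gatekeeper logic described above: a plain in-process queue stands in for the bus, and run_tests(), tag_build(), and notify() are hypothetical helpers.

 # Sketch of the "gatekeeper" loop described above. There is no real message
 # bus yet, so queue.Queue stands in for it, and run_tests(), tag_build(),
 # and notify() are hypothetical helpers.
 import queue

 bus = queue.Queue()   # stand-in for the message bus carrying build events

 def run_tests(nvr):
     # Hypothetical: run the acceptance tests (package installation, rpmdiff, ...)
     # and return True if they all pass.
     return True

 def tag_build(tag, nvr):
     print("tagging %s into %s" % (nvr, tag))    # would ask Koji to tag here

 def notify(nvr, message):
     print("notify maintainers of %s: %s" % (nvr, message))

 def gatekeeper():
     while True:
         event = bus.get()                       # e.g. {"nvr": ..., "tag": ...}
         if run_tests(event["nvr"]):
             tag_build(event["tag"], event["nvr"])
         else:
             notify(event["nvr"], "acceptance tests failed; build left untagged")

 # Example event, as it might arrive when a build completes:
 # bus.put({"nvr": "foo-1.0-1.fc11", "tag": "dist-f11"})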

The tests run on the build should be well proven to find actual problems and not cause a lot of false positives. Just like anything in Fedora, the testing system should be informative but allow the maintainer (or another authorized party) to override the tests and force the build. Our maintainers should know best, but we should give them the tools and information to help them make better decisions.

Timeline

Also discussed were timeline targets: the next 3 weeks, the next 3 months, and the next 3 quarters. We feel we can accomplish a number of the triggers and tests within the next 3 weeks (from January 11, 2009), focusing more on package and repo sanity. In the next 3 months we can likely add functional testing of the installer via Lab in a Box (http://jlaska.livejournal.com/3230.html, http://jlaska.livejournal.com/3696.html, http://jlaska.livejournal.com/3910.html). Over the next 3 quarters we hope to expand functional testing to individual packages.