QA:Release validation test plan
Before an official Fedora release comes out, Alpha and Beta pre-releases are made. Alpha, Beta and Final (GA) are the milestones for a Fedora release. At each milestone, several "test compose" (TC) and "release candidate" (RC) builds are composed and tested to ensure the build finally released as the Alpha, Beta or final release meets certain requirements. This document describes how this testing is carried out. You can contribute by downloading the candidate builds and helping to test them.
Testing will involve executing test cases to verify installation and basic functionality on different hardware platforms for the various Fedora products. Everyone is encouraged to test and to share their ideas, tests, and results.
For further information, help with getting involved, or to send comments about installation testing, please contact the QA group.
The goal of release validation testing is to ensure that the release candidate compose which is ultimately released as the milestone (following the Go/No-Go meeting) meets the Fedora Release Criteria, which define the minimum requirements for Fedora releases.
The QA team and the Product working groups - Server, Workstation and Cloud - share responsibility for conducting testing. Working groups are particularly expected to contribute to the execution of tests that are significant to their products.
Scope and Approach
Testing will include:
- Manually executed test cases in bare metal, virtual and cloud environments on the primary architectures, using the various Fedora release deliverables (installer images, live images, disk images, and the package trees used for network installation and upgrades)
In the future, some automatically executed test cases run via the Taskotron system are expected to be included in release validation testing, but these tests are not yet ready. For more information about automated testing, please see the Taskotron sub-pages, especially the install automation plan.
The release validation tests, taken together, should provide coverage for the full set of Fedora_Release_Criteria, which define the actual requirements that Fedora releases must meet.
Validation test events are expected to identify behaviour that does not meet the relevant release criteria. Each individual issue of this kind is considered a "release blocker bug". As they are identified, these should be reported and marked as proposed release blocker bugs according to the QA:SOP_blocker_bug_process. A single iteration of the process ends when a release candidate build is fully tested and no release blocker bugs are discovered; that build is then expected to be approved for release.
Other bugs discovered during testing should be reported as usual, and may be proposed as "freeze exception bugs" according to the QA:SOP_freeze_exception_bug_process, where more information on the nature and purpose of the "freeze exception" concept can be found.
Timing of validation test events
Validation test events occur before milestone releases (e.g. Alpha, Beta, Final). Each test compose and release candidate build constitutes a single test event. A detailed list of QA events for Fedora 21 can be found at http://fedorapeople.org/groups/schedule/f-21/f-21-quality-tasks.html . The prototype schedule for a Fedora release can be found at Fedora Release Life Cycle.
The procedure for running a validation testing event is documented as the release validation testing procedure.
Test organization, execution and result tracking
For each test event, several pages - often referred to as "matrix pages" or "matrices", as the tables they contain are sometimes called "test matrices" - are created from templates which list the necessary tests in several different areas, and also serve to record the results of the tests.
The basic workflow of validation testing is to download one or more images from a given candidate build, load one or more of the result pages for that build, and run some of the test cases. Give priority to earlier release levels (Alpha tests before Beta tests, Beta tests before Final tests) and to tests that have not already been run by anyone else. Report any bugs you encounter, then enter the results of your tests into the results page by editing it. Help with doing this is included in the 'source code' of the page, and don't worry about making a mistake: it can easily be reverted or fixed.
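Before running any test cases against a downloaded image, it is worth confirming the download is intact. The sketch below shows the checksum-verification step with stand-in local files; the image and checksum file names are hypothetical (real candidate builds publish a CHECKSUM file alongside the images, which you would fetch rather than generate):

```shell
#!/bin/sh
# Sketch: verify a downloaded candidate image against its checksum file.
# File names here are placeholders, not real deliverable names.
set -e

# Stand-in for a downloaded image (a real run would use curl or wget):
printf 'fake image data' > Fedora-Server-netinst.iso

# Stand-in for the published checksum file that ships with the compose:
sha256sum Fedora-Server-netinst.iso > Fedora-CHECKSUM

# The actual verification step -- the part that matters in practice:
if sha256sum -c Fedora-CHECKSUM; then
    echo "image OK"
else
    echo "image corrupt, re-download before testing" >&2
    exit 1
fi
```

Only a build that verifies cleanly should have results reported against it, since a corrupt download can produce failures that look like release blocker bugs.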
Tests are associated with a milestone (Alpha, Beta, Final) or listed as Optional. All Alpha tests must be completed without encountering release blocker bugs before the Alpha release, Beta tests before the Beta release, and Final tests before the Final release. Optional tests never have to be completed with any particular result, or indeed at all, but are listed because it is useful to run them (and file any bugs discovered) if time is available. Ideally, all tests would be run for all builds; this is rarely possible, but it is good to run more than the minimum when you can. The mandatory test types are Installation, Base, Desktop, Server and Cloud.
The release validation testing procedure explains how to generate the pages for each test event using a template system.
Current and recent results pages are collected in per-release wiki categories. The current (or most recent, if you are reading this between the release of one milestone and the first test compose for the next) results pages can be found here:
- Test Results:Current Installation Test
- Test Results:Current Base Test
- Test Results:Current Desktop Test
- Test Results:Current Server Test
- Test Results:Current Cloud Test
- Test Results:Current Summary
Each validation test event is expected to produce:
- A full set of results pages for each candidate build
- Full test coverage for the tests associated with each milestone, ideally for the final release candidate build, but at least combined across all release candidate builds
- Detailed bug reports for all issues encountered during testing, nominated as release blocker or freeze exception bugs where appropriate
Test results can be carried over from one test event to a later one if it is reasonably certain that the changes between the candidate builds in question do not affect the code paths exercised by the test case. If any change may affect the test case, it should be re-run. Detailed instructions for carrying test results forward are provided as comments in the source of the test results pages.
References and helpful pages
- How to report a bug
- How to debug installation problems
- Installer boot parameters
- Kickstart (scripted installation) config file format
- Using update image files (may be useful to test installer fixes)
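As an illustration of the kickstart format linked above, here is a minimal sketch of a scripted-installation config file. All values are placeholder assumptions for illustration only, not a recommended or official test configuration:

```
# Minimal kickstart sketch -- every value below is a placeholder assumption
lang en_US.UTF-8
keyboard us
timezone America/New_York
rootpw --plaintext changeme
bootloader --location=mbr
clearpart --all --initlabel
autopart
reboot

%packages
@core
%end
```

Kickstart files like this can be useful during validation testing for reproducing an installation repeatedly with the same settings, so that a suspected installer bug can be confirmed across candidate builds.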
Note that this page supersedes the following now-obsoleted pages: