QA:Release validation test plan
Before an official Fedora release comes out, Alpha and Beta pre-releases are made. Alpha, Beta and Final (GA) are the milestones for a Fedora release. At each milestone, nightly and 'candidate' composes are built and tested to ensure the compose ultimately released as the Alpha, Beta or Final release meets certain requirements.
This document describes how this testing is carried out. You can contribute by downloading the nightly composes and candidate builds and helping to test them.
Testing will involve executing test cases to verify installation and basic functionality on different hardware platforms for the various Fedora products. Everyone is encouraged to test and to share their ideas, tests, and results.
For further information, help with getting involved, or to send comments about installation testing, please contact the QA group.
The goal of release validation testing is to ensure that the release candidate compose which is ultimately released at each milestone (following the Go/No-Go meeting) meets the Fedora Release Criteria, which define the minimum requirements for Fedora releases.
The QA team and the Product working groups - Server, Workstation and Cloud - share responsibility for conducting testing. Working groups are particularly expected to contribute to the execution of tests that are significant to their products.
Scope and Approach
Testing will include:
- Automated tests executed by openQA, Autocloud, and in the future Taskotron (see Taskotron install automation plan)
- Manually executed test cases in bare metal, virtual and cloud environments of the primary Architectures using the various Fedora release deliverables (installer images, live images, disk images, and package trees used for network installation and upgrades)
The release validation tests, taken together, should provide coverage for the full set of Fedora Release Criteria, which define the actual requirements that Fedora releases must meet.
Validation test events are expected to result in the identification of behaviour that does not meet the relevant release criteria. Each individual issue of this kind is considered a "release blocker bug". As they are identified, these should be reported, and marked as proposed release blocker bugs according to the QA:SOP_blocker_bug_process. A single iteration of the process is expected to end when a release candidate build is fully tested and no release blocker bugs are discovered. That build is then expected to be approved for release.
Other bugs discovered during testing should be reported as usual, and may be proposed as "freeze exception bugs" according to the QA:SOP_freeze_exception_bug_process, where more information on the nature and purpose of the "freeze exception" concept can be found.
Timing of validation test events
Nightly validation events may be run at any point in the cycle. The validation event creation bot looks at each Rawhide and Branched compose and evaluates several heuristics to decide whether to create an event for it, with the intent of creating events regularly but not too often. Roughly, it creates nightly events only for the next release (once Branched exists, events are not created for Rawhide), never within three days of the previous event, and otherwise only when interesting package updates appear to have occurred or more than 14 days have passed since the previous event. It will create the Wikitcms test result pages and send an announcement email to the test-announce mailing list.
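The bot's decision logic described above can be sketched roughly as follows (the function name, argument names and exact inputs are illustrative, not the bot's actual implementation):

```python
def should_create_event(is_rawhide, branched_exists, days_since_last,
                        interesting_updates):
    """Sketch of the nightly validation event creation heuristics."""
    # Only the next release gets events: skip Rawhide once Branched exists
    if is_rawhide and branched_exists:
        return False
    # Never create an event within three days of the previous one
    if days_since_last < 3:
        return False
    # Otherwise: create an event if interesting package updates occurred,
    # or if more than 14 days have passed since the previous event
    return interesting_updates or days_since_last > 14
```

The intent is simply "regularly but not too often": a hard minimum gap of three days, a soft maximum gap of two weeks, and package churn as the trigger in between.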
'Candidate' compose validation
'Candidate' compose validation test events occur before milestone releases (Alpha, Beta and Final). Each candidate compose constitutes a single test event. A detailed list of QA events for Fedora 26, for example, can be found at http://fedorapeople.org/groups/schedule/f-26/f-26-quality-tasks.html. The prototype schedule for a Fedora release can be found at Fedora Release Life Cycle.
Organizing validation test events
The procedure for running a validation testing event is documented as the release validation testing procedure. It includes instructions for updating the wiki with the new result pages and other changes, and announcing the event on the mailing list.
Test organization, execution and result tracking
Test results are managed using the Wikitcms 'system' of wiki pages with specific names, content, and categorization. Each validation testing event (whether a nightly or candidate compose) will have a set of result pages.
The basic workflow of validation testing is to download one or more images from a given nightly compose or candidate build, load up one or more of the result pages for that compose, and run some of the test cases: give priority to earlier release levels (Alpha tests before Beta tests before Final tests) and to tests that have not already been conducted by anyone else. Report any bugs you encounter, and then enter the results of your testing into the results page, either by using the relval tool or by editing the page directly (help on doing this is included in the 'source code' of the page, and don't worry if you make a mistake: it can easily be reverted or fixed).
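When editing a result page directly, results are entered with the wiki result template. A typical entry looks something like this (the username and bug number here are placeholders; see the page source comments for the exact parameters in use):

```text
{{result|pass|exampleuser}}
{{result|fail|exampleuser|1234567}}
```

The first form records a pass for the user, the second a failure with an associated Bugzilla bug number.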
Enhancements to the testing process
Results Summary page
For each compose, there is a results Summary page: see the current Summary page for an example. This page uses the MediaWiki partial transclusion feature to display the results from all of the individual result pages together in one page. The volume might be overwhelming, but it is a handy way to see the results for all test types together. In most cases you can also enter results via the Summary page: MediaWiki will cause the edit to be applied to the correct underlying result page.
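Partial transclusion works by marking which part of a page is shared when it is included elsewhere; a simplified sketch of the mechanism (the page name here is illustrative):

```text
<!-- In an individual result page: only the content inside
     <onlyinclude> tags is shown when the page is transcluded -->
<onlyinclude>
... result tables ...
</onlyinclude>

<!-- In the Summary page: transclude the result page by name -->
{{Test Results:Current Installation Test}}
```

This is why an edit made through the Summary page lands in the underlying result page: the Summary merely displays the transcluded content.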
Useful information on test coverage is also available; see the test coverage page for the release currently under development. These pages provide a quick overview of the coverage for each validation test across all composes (nightly and candidate). This can be useful in various ways, but its main use for a tester is to see which tests have not been run recently or at all; please give such tests priority over tests which have already been run many times, to improve overall coverage. This information is produced by relval.
Reporting results with relval
The relval tool, which generates the test coverage data and helps create the result pages, can also report results by editing the result pages on your behalf. You may find this more convenient than editing the page source directly. To report results for the current nightly or candidate compose, install relval with
dnf install relval, and then run
relval report-results, which will guide you through reporting a result.
Tests are associated with a milestone (Alpha, Beta, Final) or listed as Optional. All Alpha tests must be completed without encountering release blocker bugs before the Alpha release, Beta tests before the Beta release, and Final tests before the Final release. Optional tests never have to be completed with any particular result, or indeed at all, but are listed because it is useful to conduct them (and file any bugs discovered) if time is available. Ideally, all tests would be run for all builds; this is rarely possible, but it is good to run more than the minimum if possible. The mandatory test types are Installation, Base, Desktop, Server, and Cloud.
Current and recent validation test events
Current and recent results pages can be found in the relevant wiki categories. The current (or most recent, if you are reading this between the release of one milestone and the first nightly compose for the next) results pages can be found here:
- Test Results:Current Installation Test
- Test Results:Current Base Test
- Test Results:Current Desktop Test
- Test Results:Current Server Test
- Test Results:Current Cloud Test
- Test Results:Current Summary
The validation testing process for each milestone is expected to produce:
- A full set of results pages for each candidate and nominated nightly compose
- Full test coverage for the tests associated with each milestone, ideally for the final release candidate build, but at least combined across all release candidate builds
- Detailed bug reports for all issues encountered during testing, nominated as release blocker or freeze exception bugs where appropriate
Test results can be carried over from one test event to a later one if it is reasonably certain that the changes between the two composes do not affect the code paths exercised by the test case. If any change may affect the test case, it should be re-run. Detailed instructions for carrying test results forward are provided as comments in the source of the test results pages.
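The carry-over rule above amounts to a simple check: a result may be carried forward only if none of the packages the test exercises changed between the two composes. A minimal sketch (the function and package names are illustrative; the real decision is a human judgment call):

```python
def can_carry_forward(changed_packages, packages_exercised_by_test):
    """Return True if a test result may be carried to a later compose.

    changed_packages: packages that differ between the two composes.
    packages_exercised_by_test: packages whose code paths the test covers.
    """
    # Any overlap means the test's code paths may have changed: re-run it
    return not set(changed_packages) & set(packages_exercised_by_test)
```

For example, a kernel-only change would not invalidate a test that exercises only the installer packages, but an anaconda change would.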
References and helpful pages
- How to report a bug
- How to debug installation problems
- Installer boot parameters
- Kickstart (scripted installation) config file format
- Using update image files (may be useful to test installer fixes)
Note that this page supersedes the following now-obsoleted pages: