Which tests to run
Tests with a Milestone of Alpha, Beta or Final are the most important; Optional tests are less so, but still useful to run. The milestone indicates that the test must have been run against the release candidate build before that milestone release (or any later one) can be approved: at Beta, for instance, all Alpha and Beta tests must have been run. However, it is important to run the tests for all milestones as early and often as possible. Please refer to the test coverage page linked above and try to find and run tests which have not been run at all, or not run recently, for the current release.
How to test
1. Download one or more media for testing:
2. Perform one or more of the test cases and add your results to the table below
- You can submit results by editing the page directly, or by using relval, with the relval report-results command. It provides a simple text interface for reporting test results.
3. If a test fails, file a bug report. You may propose the bug as a release blocker or freeze exception bug for the appropriate release - see blocker bug process and freeze exception bug process.
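As a sketch of the relval route mentioned in step 2 (assuming a Fedora system where relval is available from the official repositories under that package name):

```shell
# Hedged sketch: install relval, Fedora QA's release validation tool.
# The package name "relval" is an assumption; check the repositories.
sudo dnf install relval

# Launch the interactive text interface; it walks you through choosing
# the compose, test type, test case and result to report.
relval report-results
```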
Some tests must be run against particular Products or images - for example, the #Default boot and install tests. If no particular product or image is specified either in this page or the test case page, you can use any appropriate image. For example, you can run most of the #General Tests with the Workstation live image, or either of the Server install images.
If you notice a problem during a test which does not constitute a complete failure of the test, you should still file a bug report, but it may not be appropriate to propose it as a release blocker bug. Use your judgment in deciding this, with reference to the Fedora_Release_Criteria, which these tests are intended to verify. If you are unsure, err on the side of proposing the bug as a blocker.
Results summary page
The Test Results:Fedora 24 Branched 20160518.n.0 Summary page contains the results from this page and all the other validation pages for the same compose listed together to provide an overview.
Add, Modify or Remove a Test Case
- Please request review for your changes by publishing your test case for review to the test mailing list and/or the appropriate working group mailing list (e.g. server, cloud, or desktop).
- Once reviewed, make your changes to any current documents that use the template (e.g. Test_Results:Current_Security Lab_Test).
- Lastly, update Template:Security Lab_test_matrix with the same changes.
See the table below for a sample format for test results. All test results are posted using the result template.
|Test Result||Explanation||Code Entered|
| ||Untested - This test has not been run, and is available for anyone to contribute feedback.|| |
| ||Passed - The test has been run and the tester determined that it met the expected results.|| |
| ||Inprogress - An inprogress result is often used for tests that take a long time to execute. Inprogress results should be temporary and should change to pass, fail or warn.|| |
| ||Failed - Indicates a failed test. A link to a bug must be provided. See Template:Result for details on providing bug information.|| |
| ||Warning - This test completed and met the expected results, but other issues were encountered during testing that warrant attention.|| |
| ||Multiple results - Multiple people can provide results for a single test case.|| |
| ||Result from previous test run - This result was carried over directly from the test run of the previous <build>.|| |
| ||Unsupported - An unsupported test or configuration. No testing is required.|| |
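The exact codes for the Code Entered column are not shown in this copy. As a hedged illustration only, results are entered with the result template described at Template:Result; the invocations below use the commonly seen parameter order (result, username, optional bug number) with a hypothetical username, so verify against Template:Result before use:

```
{{result|none}}                         <!-- untested -->
{{result|pass|exampleuser}}             <!-- passed -->
{{result|fail|exampleuser|123456}}      <!-- failed, linking bug 123456 -->
{{result|warn|exampleuser}}             <!-- warning -->
{{result|inprogress|exampleuser}}       <!-- in progress -->
```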
|Milestone||Test Case||Security Lab|