- 1 Introduction
- 2 Test Strategy
- 3 New features of Fedora 17
- 4 Schedule/Milestones
- 5 Test Priority
- 6 Test Pass/Fail Criteria
- 7 Scope and Approach
- 8 Test Deliverables
- 9 Testing Tasks
- 10 Test Environment/Configs
- 11 Responsibilities
- 12 Risks and Contingencies
- 13 Reporting Bugs and Debugging Problems
- 14 Reviewers
- 15 References
This document describes the tests that will be created and used to verify the functions/components of Fedora 17.
The goals of this plan are to:
- Organize the test effort
- Communicate the planned tests to all relevant stakeholders for their input and approval
- Serve as a base for the test planning for future Fedora 17 releases
Instead of outlining all possible installation inputs and outputs, this test plan will focus on defining inputs and outputs at different stages in anaconda. This will also allow different tests to be performed independently during a single installation. For example, one may execute kickstart delivery via HTTP, software RAID0 partitioning using three physical disks, and a minimal package installation on a virtual guest, all in a single installation. Scenarios where the stages are dependent will be grouped by installation media.
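The combined scenario above (kickstart delivered over HTTP, software RAID0 across three disks, minimal package set) could be described by a kickstart file along the following lines. This is only an illustrative sketch: the mirror URL, disk names, and password are hypothetical placeholders, not values taken from this plan, and the file itself would be fetched over HTTP via a `ks=` boot parameter.

```
# Hypothetical kickstart sketch: RAID0 root across three disks, minimal packages
url --url=http://example.com/pub/fedora/linux/releases/17/Fedora/x86_64/os/
lang en_US.UTF-8
keyboard us
timezone UTC
rootpw --plaintext changeme
clearpart --all --initlabel
# /boot on a plain partition; a RAID0 root is not bootable directly
part /boot --size=500 --ondisk=sda
part raid.01 --size=1 --grow --ondisk=sda
part raid.02 --size=1 --grow --ondisk=sdb
part raid.03 --size=1 --grow --ondisk=sdc
raid / --level=0 --device=md0 raid.01 raid.02 raid.03
bootloader --location=mbr
%packages
@core
%end
```

A minimal package installation is approximated here with only the `@core` group selected.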
New features of Fedora 17
As with Fedora 16, Fedora 17 brings some new features. The following list outlines the larger changes that affect installation. Test plans for these features will be designed and developed on each feature page.
Additional features outside the scope of testing can be found at:
- The Fedora 17 release schedule is available at Releases/17/Schedule
- Each major milestone (Alpha, Beta, Final, etc.) will demand a full regression run
This test plan prioritizes tests according to the major release milestones for Fedora 17, including the Alpha, Beta and Final release milestones. All test cases are intended for execution at every milestone. However, priority should be given to tests specific to the milestone under test.
| Alpha test cases | Beta test cases | Final test cases |
| --- | --- | --- |
| Alpha (formerly tier#1) priority tests are intended to verify that installation is possible on common hardware using common use cases. These tests also attempt to validate Alpha Release Requirements. | Beta (formerly tier#2) priority tests take a step further to include additional use cases and installation methods. These tests also attempt to validate Beta Release Requirements. | Final (formerly tier#3) priority tests capture all remaining use cases and installation pathways. These tests also attempt to validate Final Release Requirements. |
| Verification consists of: | Verification consists of: | Verification consists of: |
Test Pass/Fail Criteria
Each milestone release of Fedora 17 should conform to these criteria:
- Trees must be generated using release engineering tools (not hand crafted)
- There must be no unresolved dependencies for packages included in the installation tree
- There must be no dependency conflicts for packages included in the installation tree
- Any changes in composition of the installation tree are explainable by way of bugzilla
Milestone-specific criteria
Scope and Approach
Testing will include:
- Manually executed test cases using DVD, boot.iso, PXE or live image media
- Automatically executed test cases via the AutoQA system. For more information about automatic testing, please see Is Anaconda Broken Proposal and the install automation roadmap.
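As one illustration of the PXE installation path listed above, a minimal `pxelinux.cfg` entry might look like the following. The file paths, menu label, and kickstart URL are hypothetical examples for this sketch, not values defined by this test plan.

```
# Hypothetical pxelinux.cfg entry for booting the Fedora 17 installer
default fedora17-install
timeout 100

label fedora17-install
  menu label Install Fedora 17 (example entry)
  kernel f17/vmlinuz
  append initrd=f17/initrd.img ks=http://192.168.1.10/ks/minimal.ks
```

In a setup like this, the kernel and initrd are served over TFTP while the kickstart file is delivered via HTTP, which matches the kickstart-over-HTTP scenario described earlier in this plan.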
- This test plan
- Test summary documents for each major milestone of Fedora 17: Category:Fedora_17_Test_Results
- A list of defects filed
- Any test scripts used for automation or verification
Testers will execute test cases to verify installation of Fedora 17 on different hardware platforms and to gather installation test feedback.
- Manual installation Test Cases
- Auto-installation Test Cases
- Instructions for adding test result page
For Fedora 17, test cases will be executed on the primary supported hardware platforms. This includes:
Fedora QA team members are responsible for executing this test plan. Contributions from Branched testers and other interested parties are encouraged.
Risks and Contingencies
If new physical media are provided for an already in-progress test run, a new test run must be initiated. Test results from the previous run may be carried forward to the new test run if they are not affected by the changes introduced by the new physical media.
Reporting Bugs and Debugging Problems
If defects or problems are encountered, please file bugs following the guide below: