Introduction

This document describes the tests that will be created and used to verify the functions/components of Fedora 21.

The goals of this plan are to:

  • Evaluate from scratch the desired test coverage for a Fedora.next-based release
  • Serve as a basis for deciding how much of that coverage is practically possible, and how it should be divided between teams
  • Function as a reference document as we draw up new test cases and subsidiary plans to cover the Fedora.next products, and move forward with that testing for the first time
  • Help us evaluate the test effort after the release of Fedora 21

Test Strategy

As has been the case for some time, there will be four broad strands of Fedora 21 testing: release validation testing, automated testing, Test Days, and ongoing manual testing of package updates. Each is covered in the sections that follow.

Scope

Release deliverables

As regards Fedora 21 release deliverables, our goal is to verify, to the best of our abilities, whether the deliverables for each milestone meet the common and product-specific Release_criteria. Priority in this area should go to the deliverables considered most vital by the project as a whole. This is likely to include the Fedora.next product deliverables, the KDE live image, and the generic (non-Product-specific) network install and DVD images, if either or both are produced.

Packages and repositories

The goal of both automated and manual testing in this area is to prevent errors and bugs from being introduced into the Fedora 21 repositories, both pre- and post-release. Our priorities are to catch updates which violate the Updates_Policy, break critical path functionality, prevent system updates from working correctly for end users (e.g. dependency problems or upgrade path violations), or prevent the composition of images (especially test compose / release candidate builds).
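
As an illustration, the sketch below shows the shape of one such automated check: an upgrade path comparison between two repositories. It is a minimal sketch only; the package data and repository names are hypothetical, a real check would read actual repository metadata, and it assumes the rpm Python bindings are available.

    import rpm

    def breaks_upgrade_path(old_evr, new_evr):
        """Return True if the new (epoch, version, release) sorts below the old one."""
        return rpm.labelCompare(new_evr, old_evr) < 0

    # Hypothetical (epoch, version, release) data for two repositories.
    f20_updates = {"foo": ("0", "1.4", "2.fc20")}
    f21_repo = {"foo": ("0", "1.3", "1.fc21")}

    for name, old_evr in f20_updates.items():
        new_evr = f21_repo.get(name)
        if new_evr and breaks_upgrade_path(old_evr, new_evr):
            print("%s: %s -> %s violates the upgrade path" % (name, old_evr, new_evr))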

Responsibilities

QA team

The QA team's responsibilities are:

  • Overall responsibility for Fedora 21 testing, as with all Fedora testing.
  • Ensuring this test plan itself is implemented: that is, ensuring that all the test mechanisms described herein are created, and that the testing envisaged under this plan is carried out (or, if there are insuperable problems in accomplishing either, alerting FESCo, the Board and/or the Project_Leader to the problems).
  • Maintaining the non-Product-specific release criteria and release validation test cases. The QA team will request input from other teams as appropriate, particularly from the Base working group, developers/package maintainers, and ReleaseEngineering.
  • Maintaining the release validation process documentation.
  • Requesting test compose and release candidate builds, per QA:SOP_compose_request.
  • Performing non-Product-specific release validation testing.
  • Performing non-Product-specific ad hoc / unplanned testing.
  • Maintaining the automated testing infrastructure.
  • Overseeing the Test Day process.

Product Working Groups

The Fedora.next#Working_groups are responsible for drawing up release criteria and release validation test cases specific to their products: that is, the Server WG is responsible for drawing up a set of Server-specific release criteria and test cases, the Workstation WG is responsible for drawing up a set of Workstation-specific release criteria and test cases, and so forth. The QA team will provide assistance with this work.

Shared responsibilities

The QA team and Working Groups share responsibility for performing Product-specific validation testing. In particular, Working Groups will be expected to contribute strongly to testing which requires substantial Product-related expertise and/or equipment/configuration.

The QA team and Working Groups also share responsibility for performing ongoing ad hoc / unplanned testing of components that relate to Products. As a rule of thumb, the more deeply a component is tied to a Product, the more the Working Group rather than the QA team should be considered responsible for testing it.

Test tasks

Release validation testing

The QA team will request an initial test compose build for each milestone on the date defined in the release schedule, and subsequent test compose builds as required (see QA:SOP_compose_request). The QA team will request an initial release candidate build for each milestone once the change deadline for that milestone has passed and all outstanding accepted release blocker bugs for that milestone are addressed, and subsequent release candidate builds as required (again, see QA:SOP_compose_request).

For each test compose and release candidate build at each release milestone, the QA team and each Working Group will carry out their responsibilities under QA:SOP_Release_Validation_Test_Event, creating a set of test result 'matrices' (effectively subsidiary test plans to this document). The QA team and Working Groups will work together to complete all required test cases across the full set of result matrices for each milestone. Test cases may be associated with the Alpha, Beta or Final milestone: all Alpha test cases must be completed at the Alpha milestone, all Alpha and Beta test cases at the Beta milestone, and all Alpha, Beta and Final test cases at the Final milestone. Test cases not associated with any milestone are optional, and are not required to be completed.
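
To make that gating rule concrete, here is a small sketch of it in Python; the milestone ordering comes from this plan, while the test case names and the data structure are hypothetical.

    # Milestone gating: a case tagged Alpha is required at Alpha, Beta and
    # Final; a Beta case at Beta and Final; a Final case only at Final.
    # Untagged cases are optional at every milestone.
    MILESTONE_ORDER = {"Alpha": 1, "Beta": 2, "Final": 3}

    def required_cases(test_cases, milestone):
        """Return the test cases that must be completed at 'milestone'."""
        level = MILESTONE_ORDER[milestone]
        return [case for case, tag in test_cases.items()
                if tag in MILESTONE_ORDER and MILESTONE_ORDER[tag] <= level]

    # Hypothetical test case data.
    cases = {"boot default install": "Alpha",
             "upgrade from previous release": "Beta",
             "artwork and release notes": "Final",
             "exotic storage layout": None}
    print(required_cases(cases, "Beta"))
    # ['boot default install', 'upgrade from previous release']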

Where a test results in a failure that appears to constitute a violation of the Release_criteria, the tester will file a proposed blocker bug, as per the QA:SOP_blocker_bug_process. Testers will file proposed freeze exception bugs at their discretion, as per the QA:SOP_freeze_exception_bug_process. The QA team will review proposed blocker and freeze exception bugs together with the development and release engineering teams, as described in the SOPs and QA:SOP_Blocker_Bug_Meeting.

Automated testing

The QA team will ensure the automated testing framework (Taskotron) runs the currently operational set of automated tests and correctly reports their results, either according to the intended schedule or in response to the intended triggering event (e.g. a package build submission).
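
Purely to illustrate that event-triggered pattern, and emphatically not Taskotron's actual API, a minimal dispatch sketch might look like the following; the names (on_event, run_depcheck, "koji_build") are hypothetical.

    # Map each trigger (e.g. a package build submission) to the checks
    # that should run when it fires.
    CHECKS_BY_TRIGGER = {}

    def on_event(trigger):
        """Register the decorated check to run when 'trigger' is observed."""
        def register(check):
            CHECKS_BY_TRIGGER.setdefault(trigger, []).append(check)
            return check
        return register

    @on_event("koji_build")
    def run_depcheck(item):
        # Hypothetical check body; a real check would inspect the build.
        return ("depcheck", item, "PASSED")

    def dispatch(trigger, item):
        """Run every check registered for 'trigger' and report the results."""
        for check in CHECKS_BY_TRIGGER.get(trigger, []):
            name, tested_item, outcome = check(item)
            print("%s on %s: %s" % (name, tested_item, outcome))

    dispatch("koji_build", "foo-1.4-2.fc21")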

Ongoing manual testing

QA team and Working Group members will maintain running Fedora 21 systems both pre- and post-release, manually test package updates as they appear, and provide feedback via the Bodhi system, according to the guidelines in QA:Updates_Testing and QA:Update_feedback_guidelines.

Required resources

The required resources for Fedora 21 testing are expected to be noticeably greater than for previous releases, due to the added burden of Fedora.next Product-specific testing. TODO: provide more precise information based on Product-specific test plans once available. Historically, release validation testing has required the full-time attention (in terms of professional working hours) of 3+ people, the substantial part-time attention of 6+ more, and the casual participation of another 6+. As a ballpark estimate, Product-specific testing of a reasonable standard may require at least the added full-time attention of one person per product, the substantial part-time attention of another per product, and the casual participation of another 2-3+ per product.
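
For a rough sense of scale, the short calculation below totals the low end of those figures, assuming three Fedora.next products (Workstation, Server and Cloud); the product count is an assumption of this sketch, not a statement of this plan.

    # Low-end staffing totals from the ballpark figures above.
    products = 3  # assumed: Workstation, Server, Cloud
    base = {"full-time": 3, "part-time": 6, "casual": 6}         # historical floor
    per_product = {"full-time": 1, "part-time": 1, "casual": 2}  # low-end estimate

    for role in base:
        print("%s: at least %d people" % (role, base[role] + per_product[role] * products))
    # full-time: at least 6; part-time: at least 9; casual: at least 12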

The Test Day process requires something on the order of ten hours of work per week from a Test Day co-ordinator in the weeks when Test Days take place.

Ongoing unplanned testing is by nature flexible in terms of resources, and can be considered as a best effort by interested testers.

TODO: automated testing resource requirements.

Schedule

The Fedora 21 schedule is the responsibility of FESCo.

Risks and contingencies

The major risk of Fedora 21 testing (as with all Fedora releases) is that insufficient human resources are available to provide a reasonable level of release validation testing.

To mitigate this risk, we should try to confirm early in the cycle that sufficient resources will be available, and focus those resources on the highest-value tests.

Possible contingency plans for insufficient release validation testing are:

  • Ship all the bits we have and hope for the best
  • Ship only the most well-tested deliverables, and/or provide disclaimers and downgrade the publicity around less well-tested ones
  • Delay the release to provide further testing time