
= User Driven Testing =

== Summary ==
Currently, most of the testing of Fedora is done by a small set of dedicated contributors. Fedora has a large userbase, some of whom are surely qualified to perform basic testing. By building an opt-in infrastructure, it will be possible to leverage the time and resources of willing users to perform testing of releases and updates.

== Logical Flow ==
1. At some point very early in the process (firstboot? first login?), the user is asked if they would be willing to participate in user-driven testing. It is explained to them that in Fedora, updates to packages (or, perhaps in the case of a beta release, the whole release) need to be tested by users, and that if they opt in, they will be prompted by PackageKit about updates which need user testing. Users can choose to opt out at any time.

2. They can choose an update which needs testing from a list.

3. User is prompted to authenticate to FAS. (Maybe this should happen at some other point, so the user is presented with a custom list, eliminating items already tested by them.)

4. Once an update is selected from the list, PackageKit will apply the update from updates-testing, then open a new window which contains:
* General update testing advice
* Package-specific update testing advice (this can live on the wiki, either Fedora's or a generic "FOSS QA Components" wiki)
* A graphical selector for giving +1 (works great!), 0 (cannot determine state) or -1 (something didn't work). ''spot's note: Given that the Bodhi karma methodology seems to be up in the air at the moment, this is not by any means set in stone.'' The user should have a very simple and clean way to report the results of their testing.
* A text box for inputting comments

5. The user then submits the results, which go into Bodhi. Once results are submitted, that update no longer appears in the PackageKit "updates which need testing" list.

6. If they report a 0 or -1, they are then prompted to back out the update by PackageKit (at their choice).
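Steps 4–6 above could be sketched roughly as follows. This is a minimal illustration only: the record shape, the karma values, and every name here (UpdateFeedback, submit_feedback, pending_updates) are hypothetical, especially since the Bodhi karma methodology is not yet settled.

```python
from dataclasses import dataclass

# Hypothetical feedback record; the real Bodhi payload may differ.
@dataclass
class UpdateFeedback:
    update_id: str   # e.g. a Bodhi update identifier
    karma: int       # +1 works great, 0 cannot determine, -1 broken
    comment: str

# Stand-in for the "updates which need testing" list shown in PackageKit.
pending_updates = {"update-foo", "update-bar"}

def submit_feedback(feedback: UpdateFeedback) -> bool:
    """Record the result (a real client would POST it to Bodhi), drop
    the update from the pending list, and report whether PackageKit
    should offer to back the update out (karma 0 or -1)."""
    assert feedback.karma in (-1, 0, +1)
    pending_updates.discard(feedback.update_id)
    return feedback.karma <= 0
```

For example, submitting a -1 for "update-foo" removes it from the pending list and signals PackageKit to offer a rollback, while a +1 removes it without any rollback prompt.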

== Backend Infrastructure ==
On the backend, should a user choose to opt in, they would be prompted to create a FAS account or authenticate to an existing one (similar to how RHN accounts were handled in the past). They would _NOT_ be required to sign the Fedora CLA in order to participate in user-driven testing, as reported results from QA testing have already been determined to be non-copyrightable and thus not considered a contribution.

Each user who opts in to perform user-driven testing will have it flagged in their account. Each successful update testing submission will be minimally logged (package, target, date/time stamp), and a per-user count will be incremented for each unique update on which feedback is given.
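The minimal logging described above might look like the sketch below. The field names and the deduplication scheme are illustrative assumptions, not a committed design.

```python
import time
from collections import defaultdict

# Hypothetical minimal log: one tuple per successful submission.
testing_log = []                    # (username, package, target, timestamp)

# Per-user set of unique (package, target) pairs, so repeated feedback
# on the same update is not double-counted.
feedback_counts = defaultdict(set)

def record_submission(username, package, target):
    """Log the minimum (package, target, date/time stamp) and return
    the user's count of unique updates tested so far."""
    testing_log.append((username, package, target, time.time()))
    feedback_counts[username].add((package, target))
    return len(feedback_counts[username])
```

A set keyed by (package, target) keeps the "unique update feedback" counter honest even if a user submits results for the same update twice.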

== Rewarding Participants ==
As thanks for their testing, users will be informed (when signing up for user-driven testing) that they will receive Fedora swag, both through random drawings and at certain threshold points (give good feedback on N updates and get a Fedora Tester T-shirt).
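The threshold scheme could be as simple as the sketch below; the specific counts and reward names are placeholders, not agreed-upon numbers.

```python
# Hypothetical reward thresholds: unique-feedback count -> swag.
REWARD_THRESHOLDS = {
    25: "Fedora Tester T-shirt",
    100: "Fedora Tester hoodie",
}

def rewards_earned(feedback_count: int) -> list:
    """Return the swag unlocked at or below the given feedback count,
    in ascending threshold order."""
    return [swag for threshold, swag in sorted(REWARD_THRESHOLDS.items())
            if feedback_count >= threshold]
```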

== Scope of Testing ==
The intent is to help our users test Fedora beyond the "things we can test in an automated fashion" (e.g. not to use this as a mechanism for our users to mindlessly run "rpmlint" for us). The testing scope should be focused on tests which cannot be easily automated.

We should also keep in mind that this will serve as an onramp for new contributors to Fedora, so the barrier to participation should be as low as possible, with the understanding that some packages are more complicated than others.

== Custom User Tests ==
While initially, QA and Fedora Packagers will be leveraged to identify testing that users can perform, we also want to encourage users to create and submit their own tests.

== Test Materials ==
If possible/applicable, test materials should include screenshots and descriptive text to help the user identify when the application behaves properly. This may require more significant upkeep, especially across multiple releases for graphical applications with changing UIs. The technology components should be able to support this.

== Beta Testing ==
With some additional work, it should be possible to use the same testing infrastructure for beta testing. Think of this as reviving the old "testers-list" tradition, with shiny new tools to lower the barrier to involvement.

== Live Image Beta Testing Logic Flow ==
1. User boots Fedora Live Beta (or other Test) Image

2. User is prompted to participate in User-driven testing for Beta

3. User authenticates to FAS

4. User is presented with a list of tests to perform for the Beta, such as (but not limited to):
* Does the video card/sound card/networking/keyboard/mouse/camera work?
* Do critical path packages work properly?

5. User picks tasks, reports success or failure with notes.

6. These reports would not go into Bodhi, but into some other tracker for these issues. (Ed. note: This would need to be built.)
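Since these reports bypass Bodhi, the separate tracker mentioned in step 6 might store something as simple as the record below. All names here are hypothetical, as that tracker has yet to be built.

```python
from dataclasses import dataclass

@dataclass
class BetaTestReport:
    username: str
    release: str      # e.g. which Beta image was booted
    test_name: str    # e.g. "sound card works"
    passed: bool
    notes: str = ""

# Stand-in for the yet-to-be-built tracker backend.
beta_tracker = []

def file_report(report: BetaTestReport) -> int:
    """Store one success/failure report and return the tracker size."""
    beta_tracker.append(report)
    return len(beta_tracker)
```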