
Introduction

There are a lot of moving pieces in where we'd like to see QA automation go for Fedora. While it would be great to say that all of it could be done in a month or two, that's not practical. We're aiming for a phased, milestone-based approach to replacing and improving our existing systems in Fedora.

The projects we're planning are listed elsewhere. The aim of this document is to describe the order in which we plan to tackle them.

Phases

Each phase is listed separately. With the exception of phases 0 and 1, the order of these phases doesn't matter from a development perspective - they're mostly independent units of work.

Phase 0: Investigation and Preparation

Phase 0 (if it can be called a phase) is mostly for investigation and preparation. There are still non-trivial details to be ironed out, decisions to be made on initial direction, and preparation work to be done.

Preparation

We still need to set up and/or configure tools like a bug tracker, CI, code review and repositories. This doesn't mean that we're going to set up our own instances of everything - some or all of those roles will be filled by existing Fedora services.

Investigation

The big item for investigation is task execution:

  • investigate the possible strategies
  • make a decision on which one to implement
  • determine whether a central database for tasks is needed

We also need to investigate the possibility of missing fedmsg messages affecting scheduling and verify that the current strategy is likely to work.
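
As part of that investigation, a minimal listener along the following lines could log the messages we would schedule from, so they can be compared against actual koji and bodhi activity over a few days. This is only a sketch; it assumes the fedmsg Python API and the topic suffixes shown, which would need to be confirmed against the production bus.

  import fedmsg

  # Follow the Fedora message bus and print the messages we would use for
  # scheduling; gaps between what shows up here and what koji/bodhi actually
  # did would indicate missed messages.
  for name, endpoint, topic, msg in fedmsg.tail_messages():
      if topic.endswith('buildsys.build.state.change') or \
              topic.endswith('bodhi.update.request.testing'):
          print(topic)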

Deliverables

  • Decision on method and plan for task execution
  • Results of fedmsg investigation and plan for any scheduling work that will be needed for phase 1
  • Support tooling configured, set up and ready for use

Phase 1: AutoQA Replacement

In order to reduce the amount of resources (both computing and human) spent on automation, the focus of phase 1 will be to fill the roles currently filled by AutoQA.

Checks Run for All Packages/Updates

  • depcheck
  • rpmlint
  • upgradepath
  • repoclosure
  • rpmguard?

Depcheck

Depcheck does not currently work with yum in Fedora 19 and newer. As Fedora 18 will be going EOL approximately 30 days after the release of Fedora 20, this needs to be addressed soon.

This can be dealt with in one of two ways:

  • replace depcheck with a similar test that is easier to maintain (one possible shape is sketched below)
  • fix depcheck so that it works with yum in Fedora 19 and later, delaying the need for a replacement
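
If the replacement route is chosen, one possible shape is a thin wrapper around repoclosure from yum-utils, pointed at a repo of the pending builds with the stable repos as the lookaside. Everything here is hypothetical: the repo ids, paths and the assumption that repoclosure's exit status reflects broken dependencies would all need to be verified.

  import subprocess

  def check_dep_closure(pending_repo_dir):
      # treat the pending builds as their own repo and only report problems
      # for them, using the stable repos as the lookaside
      cmd = ['repoclosure',
             '--repofrompath=pending,{0}'.format(pending_repo_dir),
             '--repoid=pending',
             '--lookaside=fedora',
             '--lookaside=updates']
      # assumes repoclosure exits non-zero when it finds unresolved deps;
      # if not, its output would need to be parsed instead
      return subprocess.call(cmd) == 0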

Result Notification

Instead of implementing new methods for notifying maintainers of results, we will continue using bodhi comments for check result notification.
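
For illustration, posting such a comment could look roughly like the following, assuming python-fedora's BodhiClient and a FAS account with permission to comment; the account name, update title and log URL are all placeholders.

  from fedora.client import BodhiClient

  def post_result(update_title, check_name, outcome, log_url):
      # hypothetical service account; credentials would come from configuration
      bodhi = BodhiClient(username='taskbot')
      text = '{0} {1} for {2}. Log: {3}'.format(
          check_name, outcome, update_title, log_url)
      bodhi.comment(update_title, text, karma=0)

  post_result('foo-1.0-1.fc20', 'depcheck', 'PASSED',
              'http://example.org/logs/depcheck/foo-1.0-1.fc20.log')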

Deliverables

  • Deployment of functional system capable of the following:
    • running appropriate checks on koji builds
    • running appropriate checks on bodhi updates
    • notification of results through bodhi comments
Idea: Notification for builds
What should we do about package checks that aren't appropriate for bodhi? What do we want to do about email notification?


Note: TBD
The order of the next phases is not yet set, as they don't depend on each other much. We want to nail down phases 0 and 1 a bit better before getting into the order and detail of later phases.

Phase X: Beaker Integration

Red Hat uses Beaker for a lot of internal testing, and we're working towards getting some of the checks from that ecosystem open sourced. Assuming that things go forward as planned, it will likely be worth enabling job submission to Beaker from taskbot and integrating any relevant results.

Beaker Evaluation

While we have a demo instance of Beaker set up in the Fedora infrastructure, we haven't done a full evaluation of the maintenance burden or of the hardware we would need in order to run any checks that we may receive from Red Hat QE or that could be written.

Beaker Job Submission

  • Figure out how to use Beaker's API to submit jobs (see the sketch below)
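
As a starting point, submission could go through the bkr command line tool from the beaker-client package, which accepts a job definition in XML. This is only a sketch; the distro name, task and whiteboard text in the job XML are placeholders.

  import subprocess

  JOB_XML = """<job>
    <whiteboard>taskbot: basic install check</whiteboard>
    <recipeSet>
      <recipe>
        <distroRequires>
          <and>
            <distro_name op="=" value="Fedora-20"/>
          </and>
        </distroRequires>
        <hostRequires/>
        <task name="/distribution/install" role="STANDALONE"/>
      </recipe>
    </recipeSet>
  </job>"""

  def submit_job(xml_path='taskbot-job.xml'):
      with open(xml_path, 'w') as f:
          f.write(JOB_XML)
      # 'bkr job-submit' prints the new job id (e.g. J:12345) on success
      return subprocess.check_output(['bkr', 'job-submit', xml_path]).strip()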

Beaker Results Integration

  • Figure out how to retrieve logs and job status from Beaker (see the sketch below)
  • Determine what data we need to retrieve
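
Again as a rough sketch, the bkr command line tool can dump a job's results as XML, which taskbot could parse for the overall outcome; the exact attribute names and the data worth keeping are part of what needs to be determined.

  import subprocess
  import xml.etree.ElementTree as ET

  def job_status(job_id):
      # assumes 'bkr job-results J:12345' emits the job results XML and that
      # the root <job> element carries overall status/result attributes
      xml = subprocess.check_output(['bkr', 'job-results', job_id])
      root = ET.fromstring(xml)
      return root.get('status'), root.get('result')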

Automate Beaker Job Updates

Once we have Beaker jobs, it would be better to have some sort of CI system capable of rebuilding tasks on change and pushing them to the Beaker server.

Deliverables

Tested code that allows for scheduling Beaker jobs, integrating the results of those jobs, and providing a facility for Beaker job maintainers to update their jobs without needing to rebuild manually every time.

Phase Y: Installation Checks

One of the goals for taskbot is to cover many of the simple installation test cases that are in the test matrices. The main goal here would be to get those tests done without requiring the valuable time and effort of humans, who could instead spend their time on tests that are not so easily automated.

Production of Test Images

This could be done with our own tooling or with images produced by releng. Wherever the test images come from, we'll need to figure out how to get them into a form that can be consumed by taskbot.

Runner Framework Evaluation and Decision

There are a couple of options that we need to look into for actually executing the tests.

There is also the issue of storing the tests (git isn't a great match for changing images) and how they're written. Something like Sikuli's IDE would be awesome but it would also be quite a bit of work.
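
One option worth evaluating for driving the installs themselves is libvirt's tooling, e.g. an unattended install via virt-install and a kickstart file. This is only a sketch; the VM name, memory and disk sizes, install tree URL and kickstart path are all placeholders.

  import subprocess

  def run_install(tree_url, ks_path, name='taskbot-install-test'):
      # kick off an unattended install in a local VM; success or failure would
      # be judged afterwards from the installed system and the anaconda logs
      ks_file = ks_path.split('/')[-1]
      subprocess.check_call([
          'virt-install',
          '--name', name,
          '--ram', '2048',
          '--disk', 'size=10',
          '--location', tree_url,
          '--initrd-inject', ks_path,
          '--extra-args', 'ks=file:/{0} console=ttyS0'.format(ks_file),
          '--graphics', 'none',
          '--noreboot',
      ])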

Result and Log Storage

Does our result store need more features in order to integrate with installation tests? How will the installation logs be stored? Will we need to integrate with any other result storage systems?

Documentation

If we want this system to succeed, it will need to be well documented so that the test cases can be updated.

Deliverables

A documented method for creating, maintaining and running installation tests. This includes storing results and test artifacts (logs etc. from the installation) so that they are easily discoverable from the main result display.

Phase Z: More Sophisticated Checks

Another goal of taskbot is to allow for much more variety in the automated checks that we support. As a whole, this phase will likely take years to complete, but that is due to the huge potential scope, from cloud image checks to graphical application checks and user-maintained package-specific checks. Some of the possible directions are listed in the subproject list.

There will be some overlap between what these features depend on but all have their own unique requirements.

Cloud Image Testing

The idea here would be to grab a cloud image and make it available on an OpenStack instance if it isn't already. A battery of checks would then be run on OpenStack instances in order to verify basic functionality.

This would require some input (and hopefully assistance) from the cloud folks with regard to what checks are required for basic functionality.
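
A rough sketch of that flow, using the OpenStack command line clients current at the time (glance and nova); the image name, flavor and instance name are placeholders, and credentials would come from taskbot configuration.

  import subprocess

  def boot_cloud_image(qcow2_path, image_name='fedora-cloud-test'):
      # register the image with glance if it isn't already available
      subprocess.check_call([
          'glance', 'image-create',
          '--name', image_name,
          '--disk-format', 'qcow2',
          '--container-format', 'bare',
          '--file', qcow2_path,
      ])
      # boot an instance from it; the battery of checks would then run
      # against this instance
      subprocess.check_call([
          'nova', 'boot',
          '--image', image_name,
          '--flavor', 'm1.small',
          'taskbot-cloud-check',
      ])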

Graphical Application Checks

This is a wide-ranging subset of checks that could take a variety of forms. One type is the GNOME installed tests, where a command is run in the client to run through a battery of checks on the software for which they were designed. Another type is an application-specific check written for an external tool like Sikuli or Xpresser.

For either type of check, a test client with proper graphical support would be required. Some investigation into OpenStack would be needed to determine whether its instances have the graphical capabilities we would need (also needed for phase Y). The GNOME-style checks are easier to implement as they are self-contained, but the external-style tests would require investigation into a method for storing the test cases (git is not very appropriate due to the large number of changing images) and the implementation and deployment of such a system. Both methods would likely require changes to task description methods and potentially changes to result storage.
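
For the first type, the client-side invocation is likely as simple as running gnome-desktop-testing-runner (from the gnome-desktop-testing package) against whatever installed tests a package ships; the suite name in this sketch is a placeholder, and the exit-code behaviour would need to be confirmed.

  import subprocess

  def run_installed_tests(suite=None):
      cmd = ['gnome-desktop-testing-runner']
      if suite:
          # limit the run to one suite, e.g. 'glib'
          cmd.append(suite)
      # assumes a non-zero exit status indicates test failures
      return subprocess.call(cmd) == 0

  run_installed_tests('glib')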

User-Submitted Package-Specific Checks

User-submitted checks pose several challenges around security and tooling:

  • Client VMs must be isolated in case of runaway or malicious test code
  • Access to check storage needs to be opened up for external participation
  • Changes to scheduling will be required
    • What checks to schedule for which packages/updates
    • Handling checks that have been consistently crashing
    • Limitations on available resources (likely in the form of timeouts)
  • Methods for notifying check maintainers if there are check issues
  • Giving check maintainers the ability to debug production client instances

There are still a lot of variables here which prevent more detailed description. Upon the completion of Phase 1, most of those variables should be fixed and planning for this feature can proceed.

Later Work

This is not a complete plan for QA automation, but there is a lot of work described here. While ideas and discussion around what has been planned are welcome, there are still many variables which need to be determined before further specific planning is practical and productive.

Discussion is welcome on the qa-devel list or in #fedora-qa on Freenode IRC.