From Fedora Project Wiki

Revision as of 16:17, 20 January 2014 by Kparal

What is ResultsDB

ResultsDB is a system designed to hold the results of tests executed by the AutoQA framework.

Motivation behind ResultsDB

From the beginning of AutoQA development, all test results have been stored on the autoqa-results mailing list [1]. While this is a workable system for storing test results on a small scale, searching through them (e.g. finding all test results for foobar-1.3-fc15) is difficult and cumbersome.

The primary motivation behind ResultsDB is to provide a simple, easily accessible interface to test results through a simple query API. ResultsDB is also capable of storing test results, but this is a secondary objective.

Since ResultsDB is only a system for storing test results, different 'frontend' systems can be created using ResultsDB as a data source. These frontends could be a simple tool to aggregate recent results for packages (potentially a table for a package/update's results within bodhi or koji) or something more complicated like a tool for gathering test-related metrics (historical fail/pass ratio of a specific test in a Fedora release, failure rate of critpath packages etc.).

Current State of Development

At the time of this writing, most of the first version of ResultsDB has been completed:

  • The underlying database schema has been designed.[2]
  • The data input API to be used for submitting test results has been implemented.[3]
  • A proof-of-concept front end to display test results has been implemented using TurboGears2.[4]
  • A second proof-of-concept frontend to create test plans (i.e. the Package Update Acceptance Test Plan [5]) has been started [6].
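To illustrate what submitting a result through the data input API might look like from the client side, the sketch below assembles a JSON payload. The field names (`testcase`, `item`, `outcome`, `note`) and the endpoint URL are illustrative assumptions, not the actual interface; consult the input API documentation [3] for the real one.

```python
import json

# Hypothetical endpoint; the real URL depends on the deployment.
RESULTSDB_URL = "http://localhost:5000/api/submit"  # assumed, not the real API

def build_result_payload(testcase, item, outcome, note=""):
    """Assemble one test-result submission as a JSON string.

    All field names here are assumptions for illustration only.
    """
    payload = {
        "testcase": testcase,   # e.g. "rpmlint"
        "item": item,           # the NVR under test, e.g. "foobar-1.3-fc15"
        "outcome": outcome,     # e.g. "PASSED" / "FAILED"
        "note": note,
    }
    return json.dumps(payload)

payload = build_result_payload("rpmlint", "foobar-1.3-fc15", "PASSED")
```

A client would then POST this payload to the submission endpoint; the HTTP call is omitted here since the endpoint is hypothetical.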


Down with direct SQL

During discussions, it was decided not to expose a direct SQL-based API. Instead, the preference was for a set of specific filter methods with well-defined arguments.

At the time of this writing, ResultsDB has a well-defined API for storing results, but the API for retrieving results has yet to be designed. The current thinking is that the retrieval API can be better designed once a proper production dataset exists.
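Since the retrieval API is not yet designed, the following is a purely hypothetical sketch of the "filter methods instead of direct SQL" idea: callers pass named filter arguments and never raw SQL fragments. The data and function signature are assumptions for illustration.

```python
# In-memory stand-in for the ResultsDB results table (illustrative data).
SAMPLE_RESULTS = [
    {"testcase": "rpmlint", "item": "foobar-1.3-fc15", "outcome": "PASSED"},
    {"testcase": "depcheck", "item": "foobar-1.3-fc15", "outcome": "FAILED"},
    {"testcase": "rpmlint", "item": "baz-2.0-fc15", "outcome": "PASSED"},
]

def get_results(testcase=None, item=None, outcome=None):
    """Return results matching every supplied filter argument.

    Omitted arguments match everything; callers never build SQL themselves.
    """
    def matches(r):
        return ((testcase is None or r["testcase"] == testcase)
                and (item is None or r["item"] == item)
                and (outcome is None or r["outcome"] == outcome))
    return [r for r in SAMPLE_RESULTS if matches(r)]
```

Keeping the filters as named arguments lets the server validate and index each one, which a free-form SQL interface would not allow.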

When the Fedora Message Bus is deployed, it will provide another mechanism for monitoring and broadcasting events.

Database Schema

Visit AutoQA_resultsdb_schema [2] to read up on the schema.

Generic Testplan Frontend

The testplan frontends are designed to be simple applications with an MVC [7] architecture.

  • Model - The data from ResultsDB makes up the Model.
  • Controller - Metadata stored in Mediawiki pages [8] makes up the Controller.
  • View - Simple frontend implementations [6] make up the View.

It is anticipated that multiple testplan frontends will be created for ResultsDB, and this architecture should be flexible enough to support several independent implementations.
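The MVC split above can be sketched as three small functions. This is a toy skeleton, not the real frontend code [6]; the function names and data shapes are assumptions.

```python
def load_metadata():
    """Controller: test-plan metadata, normally parsed from a wiki page [8]."""
    return {"testcases": ["rpmlint", "depcheck"]}  # illustrative values

def query_results(testcase):
    """Model: per-testcase results, normally fetched from ResultsDB."""
    stub = {"rpmlint": "PASSED", "depcheck": "FAILED"}  # stands in for a query
    return stub.get(testcase, "NO RESULT")

def render(metadata):
    """View: render one line per testcase in the plan."""
    lines = ["%s: %s" % (tc, query_results(tc)) for tc in metadata["testcases"]]
    return "\n".join(lines)
```

Because the three roles only meet through these narrow interfaces, a new frontend can swap out the View while reusing the same Model and Controller.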


A frontend was created for the Package Update Acceptance Testplan (PUATP) [5] as an example. There are multiple 'levels' of acceptance, based on the results of different tests. Many AutoQA tests are run on most, if not all, packages, and the same data can be presented in different contexts to make it easier to understand.

At the time of this writing, FedoraQA is using MediaWiki as its TCMS. A method to store and retrieve test metadata on wiki pages was proposed [9] and is currently being used for the more complicated PUATP template [8].

The frontend [6] parses metadata from the wiki page, queries ResultsDB for test results from a specific NVR/testcase combination, and renders the information into an easily readable format.

Mediawiki Testcase Metadata

The semantics of the metadata for the PUATP testcase [8] are quite simple.

  • testcases (required): A dictionary that maps a testcase name (key) to the URL of the testcase metadata (value). Testcase names (i.e. rpmlint or initscripts) are the primary method of reference beyond this dictionary.
  • testcase_classes (required): A list defining any number of 'classes'. Any class in this list must be further defined in the metadata for the same testcase.
  • mandatory, introspection, advisory (test classes, user defined): Each test class is a dictionary describing which testcases are taken into account, which results are accepted, and what overall result the test class should have if any of its tests falls outside the accepted set.
    • testcases: List of testcase names that fall under the specified class.
    • pass: List of testcase results that qualify as 'accepted'. If all possible test results are in this set, the overall result of the test class will always be "PASSED".
    • on_fail: Overall test class result if any of the specified testcases returns a result not on the 'pass' list.
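The semantics above can be made concrete with a small example. The metadata values and URLs below are illustrative assumptions, not the actual PUATP template [8]; the evaluation function simply applies the pass/on_fail rule described above.

```python
# Illustrative metadata following the semantics described above.
# Values and URLs are assumptions, not the real PUATP template [8].
METADATA = {
    "testcases": {
        "rpmlint": "https://example.org/wiki/QA:Rpmlint",          # assumed URL
        "initscripts": "https://example.org/wiki/QA:Initscripts",  # assumed URL
    },
    "testcase_classes": ["mandatory"],
    "mandatory": {
        "testcases": ["rpmlint", "initscripts"],
        "pass": ["PASSED", "INFO"],
        "on_fail": "FAILED",
    },
}

def evaluate_class(metadata, class_name, results):
    """Overall class result: 'PASSED' only if every testcase in the class
    produced a result from the 'pass' list; otherwise the on_fail value."""
    cls = metadata[class_name]
    for tc in cls["testcases"]:
        if results.get(tc) not in cls["pass"]:
            return cls["on_fail"]
    return "PASSED"
```

For instance, a package whose rpmlint result is PASSED and whose initscripts result is INFO passes the 'mandatory' class, while any result outside the 'pass' list yields FAILED.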