Writing AutoQA Tests

Introduction
Here's some info on writing tests for AutoQA. There are four parts to a test: the test code, the test object, the Autotest control file, and the AutoQA control file. Typically they all live in a single directory, located in the tests/ dir of the autoqa source tree.

Write test code first
I'll say it again: Write the test first. The tests don't require anything from autotest or autoqa. You should have a working test before you even start thinking about AutoQA.

You can package up pre-existing tests or write a new test in whatever language you're comfortable with. It doesn't even need to return a meaningful exit code if you don't want it to (though a meaningful exit code is definitely better). You'll handle parsing the output and returning a useful result in the test object.
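To make that concrete, here is a minimal plain-Python sketch of turning a tool's raw output into a result and a summary, the way a test object might. It uses no autoqa or autotest APIs, and the 'E:'/'W:' prefixes are an assumed, rpmlint-like convention, not part of any real interface.

```python
# Count error and warning lines in a tool's output and derive a result
# string and a short summary from them. The 'E:'/'W:' prefixes are an
# assumed convention, not part of any real AutoQA API.
def summarize(output):
    lines = output.splitlines()
    errors = sum(1 for line in lines if line.startswith('E:'))
    warnings = sum(1 for line in lines if line.startswith('W:'))
    result = 'FAILED' if errors else 'PASSED'
    return result, '%d errors, %d warnings' % (errors, warnings)
```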

If you are writing a brand new test, there are some python libraries that have been developed for use in existing AutoQA tests. More information about this will be available once these libraries are packaged correctly, but they are not necessary to write your own tests. You can choose to use whatever language and libraries you want.

The test directory
Create a new directory to hold your test. The directory name will be used as the test name, and the test object name should match it. Choose a name without spaces, dashes, or dots; underscores are acceptable.
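The naming rules above can be expressed as a small check. This helper is purely illustrative (it is not part of autoqa): it accepts letters, digits, and underscores only.

```python
# Hypothetical helper (not part of autoqa): check that a test name
# follows the naming rules above (no spaces, dashes, or dots).
import re

def valid_test_name(name):
    return re.match(r'^[A-Za-z0-9_]+$', name) is not None
```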

Drop your test code into the directory - it can be a bunch of scripts, a tarball of sources that may need compiling, whatever.

Next, copy the template files from the autoqa source tree into your test directory and rename them to match your test name.

The control file
The control file defines some metadata for this test and executes the test. Here's an example control file:

 # control file for conflicts test
 DOC = """
 This test runs potential_conflict from yum-utils to check for possible
 file / package conflicts.
 """
 MAINTAINER = "Martin Krizek "

 job.run_test('conflicts', **autoqa_args)

Required data
The following control file items are required for valid AutoQA tests:


 * DOC: A verbose description of the test - its purpose, the logs and data it will generate, and so on.
 * MAINTAINER: The contact person for this test. If it doesn't work or there are some problems, then we know who to contact.

Launching the test object
Most tests will have a line in the control file like this:

 job.run_test('conflicts', **autoqa_args)

This creates a 'conflicts' test object (see below) and passes along all required variables:


 * autoqa_args: a dictionary containing all the event-specific variables (e.g. kojitag for the post-koji-build event). These are documented in the per-event template files. Some more variables may also be present, as described in the template file.

Those variables will be inserted into the control file by the autoqa test harness when it's time to schedule the test.
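The **autoqa_args pattern above is ordinary Python keyword-argument expansion. The following plain-Python illustration uses a toy stand-in for job.run_test (not autotest's real implementation) to show how a dictionary of event variables becomes keyword arguments:

```python
# Toy stand-in for autotest's job.run_test: it just returns the name
# and the keyword arguments it received, so we can see the expansion.
def run_test(name, **kwargs):
    return name, kwargs

# A dictionary of event variables, as the harness would build it
autoqa_args = {'kojitag': 'dist-f12-updates-candidate',
               'nvr': 'espeak-1.42.04-1.fc12'}

# The **autoqa_args expansion turns each dict entry into a keyword argument
test_name, test_args = run_test('conflicts', **autoqa_args)
```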

The control.autoqa file
The control.autoqa file allows a test to define scheduling requirements or modify input arguments. It decides whether to run the test at all, on which architectures and distributions it should run, and so on. It is evaluated on the AutoQA server before the test itself is scheduled and run on an AutoQA client.

All variables available in control.autoqa are documented in the template file. You can override them to customize your test's scheduling. Basically you can influence:
 * Which event the test runs for and under which conditions.
 * The type of system the test needs. This includes system architecture, operating system version, and whether the system supports virtualization (see autotest labels for additional information).
 * Data passed from the event to the test object.

Here is an example control.autoqa file:

 # this test can be run just once and on any architecture,
 # so override the default set of architectures
 archs = ['noarch']

 # this test may be destructive, let's require a virtual machine for it
 labels = ['virt']

 # we want to run this test just for the post-koji-build event;
 # note that 'execute' defaults to False, and to have the test
 # scheduled, control.autoqa needs to complete with 'execute'
 # set to True
 if event in ['post-koji-build']:
     execute = True

Like the control file, the control.autoqa file is a Python script, so you can use conditional expressions, loops, or virtually any other Python statement there. However, it is strongly recommended to keep this file as simple as possible and put all the logic into the test object.

Test Object
The test object is a python file that defines an object that represents your test. It handles the setup for the test (installing packages, modifying services, etc), running the test code, and sending results to Autotest (and other places).

Convention holds that the test object file - and the object itself - should have the same name as the test. For example, the conflicts test contains a file named conflicts.py, which defines a conflicts class, as follows:

 from autotest_lib.client.bin import utils
 from autoqa.test import AutoQATest
 from autoqa.decorators import ExceptionCatcher

 class conflicts(AutoQATest):
     ...

The name of the class must match the name given in the job.run_test() line of the control file, and test classes must be subclasses of the AutoQATest class. But don't worry too much about how this works - the template contains the skeleton of an appropriate test object. Just change the name of the file (and class!) to something appropriate for your test.

AutoQATest base class
This class contains the functionality common to all the tests. When you override some of its methods (like setup, initialize, or run_once), it is important to call the parent method first.

The most important attribute of this class is detail (an instance of the TestDetail class), which is used for storing all test outcomes.

The most important methods include log(), which you are advised to use for logging test output, and post_results(), which you can use to change the way your test results are reported (if you're not satisfied with the default behavior).

Whenever your test crashes, the ExceptionCatcher decorator will automatically catch the exception and log it (more on that in the next section).

ExceptionCatcher decorator
When an unintended exception is raised during test setup, initialization, or execution, and the ExceptionCatcher decorator is used for these methods (the default), it calls an exception-handling method instead of simply crashing. This way we are able to operate on and submit results even for crashed tests.

When such an event occurs, the test result is set to CRASHED, the exception traceback is added to the test output, and the exception info is put into the test summary. Then the results are reported in the standard way (by creating log files and sending emails). Finally, the original exception is re-raised.
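The core of this behavior can be sketched in plain Python. The real ExceptionCatcher in autoqa.decorators does more (log files, emails, reporting); this toy decorator only illustrates the catch, record, and re-raise idea described above:

```python
# Minimal sketch of the ExceptionCatcher idea: catch the exception,
# record a CRASHED result plus the traceback, and re-raise. Not the
# real autoqa implementation.
import functools
import traceback

def exception_catcher(func):
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        try:
            return func(self, *args, **kwargs)
        except Exception:
            self.result = 'CRASHED'               # mark the test as crashed
            self.output = traceback.format_exc()  # keep the traceback
            raise                                 # re-raise the original exception
    return wrapper

class DummyTest(object):
    result = None
    output = ''

    @exception_catcher
    def run_once(self):
        raise ValueError('boom')
```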

If a recovery procedure different from the default is desired, you may define your own method and provide its name as an argument to the decorator. For example:

 def my_exception_handler(self, exc=None):
     # do something different
     ...

 @ExceptionCatcher('self.my_exception_handler')
 def run_once(self, **kwargs):
     ...

You are advised to call the original default handler inside your custom handler, if applicable.

setup
This is an optional method of the test class. It is where you make sure that any required packages are installed, services are started, your test code is compiled, and so on. For example:

 @ExceptionCatcher
 def setup(self):
     retval = utils.system('yum -y install httpd')
     assert retval == 0
     if utils.system('service httpd status') != 0:
         utils.system('service httpd start')

initialize
This does any pre-test initialization that needs to happen. AutoQA tests typically use this method to initialize various structures, set self.detail.id, and similar tasks. This is an optional method.

All basic initialization is done in the AutoQATest class, so check it out before you redefine it.

run_once
This is where the test code actually gets run. It's the only required method for your test object.

In short, this method should build the argument list, run the test binary and process the test result and output. For example, see below:

 @ExceptionCatcher
 def run_once(self, baseurl, parents, name, **kwargs):
     super(self.__class__, self).run_once()

     cmd = './potential_conflict.py --tempcache --newest ' \
           '--repofrompath=target,%s --repoid=target' % baseurl

     out = utils.system_output(cmd, retain_output=True)
     self.log(out, printout=False)

The example above runs the command and saves its output. It raises an exception if the command ends with a non-zero exit code.

If you need just the exit code of the command, use the utils.system() method instead.

Additionally, if you need both the exit code and the command output, use the built-in utils.run() method:

 cmd_result = utils.run(cmd, ignore_status=True,
                        stdout_tee=utils.TEE_TO_LOGS,
                        stderr_tee=utils.TEE_TO_LOGS)
 output = cmd_result.stdout
 retval = cmd_result.exit_status
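For comparison, here is the same "capture both exit code and output" pattern written with the Python standard library, which you can run outside of autotest (this is an illustration, not part of the autoqa API):

```python
# Run a command and capture its exit code and output using only the
# standard library's subprocess module.
import subprocess

proc = subprocess.Popen(['echo', 'hello'],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                        universal_newlines=True)
out, err = proc.communicate()   # read stdout/stderr until the process exits
retval = proc.returncode        # the command's exit code
```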

Useful test object attributes
Test object instances have the following attributes available:

 outputdir      eg. results/<job>/<testname>
 resultsdir     eg. results/<job>/<testname>/results
 profdir        eg. results/<job>/<testname>/profiling
 debugdir       eg. results/<job>/<testname>/debug
 bindir         eg. tests/<testname>
 src            eg. tests/<testname>/src
 tmpdir         eg. tmp/<tempname>_<testname>

Test Results
The AutoQATest class provides a detail attribute to be used for storing test results. It is an instance of the TestDetail class and serves as a container for everything related to the test outcome.

ID
Every test run should store a string identification of the test in self.detail.id. This doesn't have to be unique, but it should be descriptive enough for the log reader to quickly understand what has been tested. For RPM-based tests this will probably be a build NVR. For Bodhi update-based tests this will probably be a Bodhi update title. For yum repository-based tests this will be a repository name or address. And so on.

Usually the ID is set in the initialize method (as soon as possible), because it does not change throughout the test:

 @ExceptionCatcher
 def initialize(self, config, nvr, **kwargs):
     super(self.__class__, self).initialize(config)
     self.detail.id = nvr

Architecture
If your test is not architecture-independent (aka a noarch test, the default), you must set the architecture of the tested items in self.detail.arch. That is usually also done in the initialize method.

Overall Result
The overall test result is stored in self.detail.result. You should set it in run_once according to the result of your test. You can choose from these values:


 * PASSED - the test passed; there is no problem with it
 * INFO - the test passed, but there is some important information that a relevant person would very probably like to review
 * FAILED - the test failed; requirements are not met
 * NEEDS_INSPECTION (default) - the test failed, but a relevant person is needed to inspect it and possibly waive the errors
 * ABORTED - some third-party error occurred (networking error, an external script used for testing crashed, etc.) and the test could not complete because of it. Re-running the test with the same input arguments could solve the problem.
 * CRASHED - the test crashed because of a programming error somewhere in our code (the test script or the autoqa code). Close inspection is necessary to solve the issue.

If no value is set in self.detail.result, NEEDS_INSPECTION is used.

If an exception occurs and is caught by the ExceptionCatcher decorator (i.e. you don't catch it yourself), the result is set to CRASHED.

Using ABORTED result properly
If you want to end your test with the ABORTED result, simply set self.detail.result = 'ABORTED' and then re-raise the original exception. The summary will be filled in automatically (extracted from the exception message) if it is empty.

 try:
     ...  # download from Koji
 except IOError, e:  # or some other error
     self.detail.result = 'ABORTED'
     raise

If you don't have any exception to re-raise but still want to end the test, again set self.detail.result = 'ABORTED', but this time be sure to also provide an explanation in self.detail.summary, and then end the test by raising an exception. Alternatively, you can provide the error explanation as an argument to the exception class instead of filling in the summary.

 from autotest_lib.client.common_lib import error
 foo = ...  # do some stuff
 if foo is None:
     self.detail.result = 'ABORTED'
     raise error.TestFail('No result returned from service bar')

Summary
self.detail.summary should contain a few words summarizing the test output. It is used in the log overview and in the email subject. E.g. for the conflicts test it could be "69 packages with file conflicts", or for the rpmlint test "3 errors, 5 warnings". Don't repeat the test name, test result, or test ID here.

Output
Log any test output you want to save (to be emailed and stored in a log file on the server) using the self.log() method. Usually this is used for saving important information throughout the test. For the rpmlint test this may include which RPM packages will be downloaded and tested, followed by the actual output of the rpmlint command.

 self.log('Build to be tested: %s' % nvr)
 cmd = '...'
 output = utils.system_output(cmd, retain_output=True)
 self.log(output, printout=False)
 ...
 self.log('Found %d errors in the command output' % errors, stderr=True)

The output is stored in the self.detail.output variable. You can modify it if needed (for example, to filter out lines you don't want to see in the log), but you are highly discouraged from adding new content to that variable directly (use the log() method instead).
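The kind of filtering mentioned above can be sketched in plain Python. This helper is purely illustrative (the 'DEBUG:' prefix is an assumed noise marker, not an autoqa convention):

```python
# Drop lines with a given noise prefix from already-collected output,
# the way a test might prune its log before it is reported.
def filter_output(output, noise_prefix='DEBUG:'):
    kept = [line for line in output.splitlines()
            if not line.startswith(noise_prefix)]
    return '\n'.join(kept)
```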

Highlights
If you want to emphasize a certain line or set of lines in your output, you can do it by highlighting them. That enables the reader to easily spot warnings and errors amongst hundreds of lines of output. It is usually done by:

self.log('RPM checksum invalid', highlight=True)

You can also provide a descriptive comment for the highlighted lines:

 self.log('= Problems overview =\n %s' % problems,
          highlight='This test will not pass until all of these problems are solved.')

If you need to highlight a line that has already been logged (e.g. part of the command output), you can assign directly to the highlights variable of self.detail (it is a list of tuples).

Additional log fields
You can provide additional log fields to be displayed in your log. Explore the self.detail variable (an instance of the TestDetail class), especially these attributes:
 * an attribute that lets you provide additional detail for your summary section
 * attributes that let you provide more information about which Koji tag or Bodhi update this test is related to
 * an attribute that lets you specify any custom field for the log overview section

Extra Data
Further test-level info can be returned by using the write_test_keyval() method. The following example demonstrates extracting and saving the kernel version used when running a test:

 extrainfo = dict()
 for line in self.results.stdout:
     if line.startswith("kernel version "):
         extrainfo['kernelver'] = line.split()[3]
 ...
 self.write_test_keyval(extrainfo)

In addition to test-level key/value pairs, per-iteration key/value information (e.g. performance metrics) can be recorded:
 * 1) write_attr_keyval() - stores test attributes (string data). Test attributes are limited to 100 characters.
 * 2) write_perf_keyval() - stores test performance metrics (numerical data). Performance values must be floating-point numbers.
 * 3) write_iteration_keyval() - stores both attributes and performance data.
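Under the hood, keyvals end up as simple key=value text files. The sketch below is an illustration of that format only, not autotest's actual implementation (which, among other things, marks performance keys specially):

```python
# Write a dictionary to disk as simple 'key=value' lines, one per key,
# sorted for reproducibility. Illustrative only.
import os
import tempfile

def write_keyval(path, keyvals):
    with open(path, 'w') as f:
        for key, value in sorted(keyvals.items()):
            f.write('%s=%s\n' % (key, value))
```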

Log files and scratch data
Several logs are created for every test execution. The default log is the one linked and emailed to any configured recipients; it is intended for a wide audience.

There is also a full log. It contains everything that has been logged throughout the test execution (the default log may have some lines filtered out, but they should always be present in this log).

The most detailed log is the debug log, placed in the root test results directory. Autotest automatically logs into that file everything printed on the screen and the full output of any commands you run.

If you want to store a custom file (like your own log), just save it to the resultsdir directory. All those files will be saved at the end of the test.

If you need to store some scratch data that should be discarded when the test finishes, use the tmpdir directory.

Posting feedback into Bodhi
After the result of a test is known, it can be sent to Bodhi. You need to manually call the post_results() method at the end of your test and provide a bodhi parameter.

 # Report test result to Bodhi
 # 'name' variable contains the Bodhi update title relevant for this test
 self.post_results(bodhi={'title': name})

ResultsDB
Results are also automatically stored in ResultsDB (FIXME: add URL of the resultsdb frontend). The only thing you need to take care of is some 'identification' data. For example, when your test deals with packages/builds, you will want to store the Koji tag, the envr, or other useful information.

To store this information, add appropriate key-value pairs to the self.detail.keyvals dictionary:

 # taken from rpmguard
 self.detail.keyvals = {'kojitag': kojitag, 'envr': nvr}

Even though the key-value pairs are entirely up to you, please make sure that you store the envr(s) of the tested packages/builds/whatever-has-an-envr under the 'envr' key.

You can also store multiple values under the same key by making the value a list of strings:

 self.detail.keyvals = {'envr': ['foo', 'bar', 'foobar']}

Tests with multiple results per testrun
Some tests produce multiple results in one testrun (e.g. depcheck). If you want to store them separately in ResultsDB, make sure to set the appropriate attribute of the test object to True (see the AutoQATest class).

Then you can create multiple TestDetail instances (e.g. one for each tested package/update/whatever), fill them out (don't forget the keyvals), and post them using the post_results() method:

 ...
 all_test_details = []
 for build in all_builds_to_test:
     td = TestDetail(self, id=build['nvr'], arch=arch)
     all_test_details.append(td)
     ...
     td.result = "PASSED"
     td.keyvals = {'envr': build['nvr'], ...}
 ...
 for td in all_test_details:
     self.post_results(td)
 ...
 self.test_detail.keyvals = {'envr': [build['nvr'] for build in all_builds_to_test]}
 ...

This will create multiple 'Testrun' records in the ResultsDB, one for each TestDetail created & reported. And one 'Job' record, which encapsulates all the 'Testruns'.

Install AutoQA from GIT
First of all, you'll need to check out some version from git. You can use either master or a tagged release.

To check out the master branch:

 git clone git://git.fedorahosted.org/autoqa.git autoqa
 cd autoqa

To check out a tagged release:

 git clone git://git.fedorahosted.org/autoqa.git autoqa
 cd autoqa
 git tag -l
 git checkout -b v0.3.5-1 tags/v0.3.5-1

`git tag -l` lists the available tags; at the time of writing this document, the latest tag was v0.3.5-1.

Add your test
The best way to add your test to the directory structure is to create a new branch, copy your test in, and install autoqa:

 git checkout -b my_new_awesome_test
 cp -r /path/to/directory/with/your/test ./tests
 make clean install

Run your test
This depends on the event your test is supposed to run under. Let's assume it is the post-koji-build event:

 /usr/share/autoqa/post-koji-build/watch-koji-builds.py --dry-run

This command will show you current koji builds, e.g.:

 No previous run - checking builds in the past 3 hours
 autoqa post-koji-build --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 espeak-1.42.04-1.fc12
 autoqa post-koji-build --kojitag dist-f11-updates-candidate --arch x86_64 kdemultimedia-4.3.4-1.fc11
 autoqa post-koji-build --kojitag dist-f11-updates-candidate --arch x86_64 kdeplasma-addons-4.3.4-1.fc11
 autoqa post-koji-build --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 cryptopp-5.6.1-0.1.svn479.fc12
 autoqa post-koji-build --kojitag dist-f12-updates-candidate --arch x86_64 drupal-6.15-1.fc12
 autoqa post-koji-build --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 seamonkey-2.0.1-1.fc12
 ... output trimmed ...

To run your test, just select one of the lines and add parameters that will execute your new test locally. If you wanted to run rpmlint, for example, the command would be:

 autoqa post-koji-build --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 espeak-1.42.04-1.fc12 --test rpmlint --local

Debugging problems
If your test behaves incorrectly in certain cases, you can use the Python debugger even when the test is run through the AutoQA/Autotest framework. Add a pdb.set_trace() call where you want to stop the program and begin debugging:

 @ExceptionCatcher
 def run_once(self, **kwargs):
     super(self.__class__, self).run_once()

     # do some stuff

     import pdb
     pdb.set_trace()

     # do some other stuff

Then run your test locally (see the --local option above). You will be given a Pdb shell on the chosen line:

 16:02:08 INFO | > /usr/share/autotest/client/site_tests/rpmlint/rpmlint.py(51)run_once()
 16:02:08 INFO | -> self.log('%s\n%s\n%s' % ('='*40, nvr, '='*40))
 16:02:22 INFO | (Pdb) help
 16:02:22 INFO | Documented commands (type help <topic>):
 16:02:22 INFO | ========================================
 16:02:22 INFO | EOF    bt         cont      enable  jump  pp       run      unt
 16:02:22 INFO | a      c          continue  exit    l     q        s        until
 16:02:22 INFO | alias  cl         d         h       list  quit     step     up
 16:02:22 INFO | args   clear      debug     help    n     r        tbreak   w
 16:02:22 INFO | b      commands   disable   ignore  next  restart  u        whatis
 16:02:22 INFO | break  condition  down      j       p     return   unalias  where
 16:02:22 INFO |
 16:02:22 INFO | Miscellaneous help topics:
 16:02:22 INFO | ==========================
 16:02:22 INFO | exec  pdb
 16:02:22 INFO |
 16:02:22 INFO | Undocumented commands:
 16:02:22 INFO | ======================
 16:02:22 INFO | retval  rv
 16:02:37 INFO | (Pdb) list
 16:02:37 INFO |  46            super(self.__class__, self).run_once()
 16:02:37 INFO |  47
 16:02:37 INFO |  48            import pdb
 16:02:37 INFO |  49            pdb.set_trace()
 16:02:37 INFO |  50            # add header
 16:02:37 INFO |  51  ->        self.log('%s\n%s\n%s' % ('='*40, nvr, '='*40))
 16:02:37 INFO |  52
 16:02:37 INFO |  53            # download packages
 16:02:37 INFO |  54            pkgs = self.koji.get_nvr_rpms(nvr, self.tmpdir, debuginfo=False,
 16:02:37 INFO |  55                                          src=True)
 16:02:37 INFO |  56            # run rpmlint

For a 10-minute introduction to using Pdb, read Debugging in Python.

Providing documentation
Every test should have at least basic documentation written in its control file and in its test object. But if you intend to create a test for public consumption (not just for your own needs), you should provide proper and detailed documentation on these wiki pages.

All AutoQA test documentation belongs in Category:AutoQA tests, and the page name should be AutoQA tests/<test name>, where the test name starts with an uppercase letter. These pages will be linked from the log of every executed test.

See the existing documentation for inspiration on how to create your own.

Links

 * Verifying AutoQA tests

Autotest documentation

 * ControlRequirements (covers the requirements for control files)
 * AddingTest (covers the basics of test writing)
 * AutotestApi (more detailed info about writing tests)