
Draft: This page is a draft only. It is still under construction and content may change. Do not rely on the information on this page.

Introduction

Here's some info on writing tests for AutoQA. There are four parts to a test: the test code, the test object, the Autotest control file, and the AutoQA control file. Typically they all live in a single directory, located in the tests/ dir of the autoqa source tree.

Start with a test: Before considering integrating a test into AutoQA or Autotest, create a working test. Creating a working test should not require knowledge of Autotest or AutoQA. This page outlines the process of integrating an existing test into AutoQA.

Write test code first

I'll say it again: Write the test first. The tests don't require anything from autotest or autoqa. You should have a working test before you even start thinking about AutoQA.

You can package up pre-existing tests or you can write a new test in whatever language you're comfortable with. It doesn't even need to return a meaningful exit code if you don't want it to (though a meaningful exit code is definitely better). You'll handle parsing the output and returning a useful result in the test object.

If you are writing a brand new test, there are some python libraries that have been developed for use in existing AutoQA tests. More information about this will be available once these libraries are packaged correctly, but they are not necessary to write your own tests. You can choose to use whatever language and libraries you want.

The test directory

Create a new directory to hold your test. The directory name will be used as the test name, and the test object name should match that. Choose a name that doesn't use spaces, dashes, or dots. Underscores are acceptable.

Drop your test code into the directory - it can be a bunch of scripts, a tarball of sources that may need compiling, whatever.

Next, from the directory autoqa/doc/, copy template files control.template, control.autoqa.template and test_class.py.template into your test directory. Rename them to control, control.autoqa and [testname].py, respectively.
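For example, for a hypothetical test named mytest, the whole setup might look like this when run from the top of the autoqa source tree:

mkdir tests/mytest
cp /path/to/your/test/* tests/mytest/
cp doc/control.template tests/mytest/control
cp doc/control.autoqa.template tests/mytest/control.autoqa
cp doc/test_class.py.template tests/mytest/mytest.py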

The control file

The control file defines some metadata for this test - who wrote it, what kind of a test it is, what test arguments it uses from AutoQA, and so on. Here's an example control file:

# control file for the conflicts test

AUTHOR = "Will Woods <wwoods@redhat.com>"
TIME="SHORT"
NAME = 'conflicts'
DOC = """
This test runs potential_conflict from yum-utils to check for possible
file / package conflicts.
"""
TEST_TYPE = 'CLIENT'
TEST_CLASS = 'General'
TEST_CATEGORY = 'Functional'

job.run_test('conflicts', config=autoqa_conf, **autoqa_args)
FIXME: Append some real control file to show 'how it looks'.

Required data

The following control file items are required for valid AutoQA tests. The first three are the most important; the rest matter less but are still required.

  • NAME: The name of the test. Should match the test directory name, the test object name, etc.
  • AUTHOR: Your name and email address.
  • DOC: A verbose description of the test - its purpose, the logs and data it will generate, and so on.
  • TIME: either 'SHORT', 'MEDIUM', or 'LONG'. This defines the expected runtime of the test: less than 15 minutes, less than 4 hours, or more than 4 hours, respectively.
  • TEST_TYPE: either 'CLIENT' or 'SERVER'. Use 'CLIENT' unless your test requires multiple machines (e.g. a client and server for network-based testing).
  • TEST_CLASS: This is used to group tests in the UI. 'General' is fine. We may use this field to refer to the test hook in the future.
  • TEST_CATEGORY: This defines the category your test is a part of - usually this describes the general type of test it is. Examples include Functional, Stress, Performance, and Regression.

Optional data

The following control file items are optional, and infrequently used, for AutoQA tests.

DEPENDENCIES = 'POWER, CONSOLE'
SYNC_COUNT = 1
  • DEPENDENCIES: Comma-separated list of hardware requirements for the test. Currently unsupported.
  • SYNC_COUNT: The number of hosts to set up and synchronize for this test. Only relevant for SERVER-type tests that need to run on multiple machines.

Launching the test object

Most tests will have a line in the control file like this:

job.run_test('conflicts', config=autoqa_conf, **autoqa_args)

This will create a 'conflicts' test object (see below) and pass along the following variables.

autoqa_conf
Contains the contents of the autoqa.conf file (usually located at /etc/autoqa/autoqa.conf) as a string. Note, though, that some of the values in autoqa_conf are changed by the autoqa harness while scheduling the testrun.
autoqa_args
A dictionary containing all the hook-specific variables (e.g. kojitag for the post-koji-build hook). These are documented in the hooks/[hookname]/README files. Some more variables may also be present, as described in the template file.

Those variables will be inserted into the control file by the autoqa test harness when it's time to schedule the test.

The control.autoqa file

The control.autoqa file allows a test to define any scheduling requirements or modify input arguments. This file decides whether to run this test at all, on what architectures/distributions it should run, and so on. It is evaluated on the AutoQA server before the test itself is scheduled and run on an AutoQA client.

All variables available in control.autoqa are documented in doc/control.autoqa.template. You can override them to customize your test's scheduling. Basically you can influence:

  • Which event (i.e. hook) the test runs for and under which conditions.
  • The type of system the test needs. This includes system architecture, operating system version and whether the system supports virtualization (see autotest labels for additional information)
  • Data passed from the hook to the test object.

Here is an example control.autoqa file:

# this test can be run just once and on any architecture,
# override the default set of architectures
archs = ['noarch']

# this test may be destructive, let's require a virtual machine for it
labels = ['virt']

# we want to run this test just for post-koji-build hook
if hook not in ['post-koji-build']:
    execute = False

Similar to the control file, the control.autoqa file is a Python script, so you can execute conditional expressions, loops or virtually any other Python statements there. However, it is strongly recommended to keep this file as simple as possible and put all the logic in the test object.

Test Object

The test object is a python file that defines an object that represents your test. It handles the setup for the test (installing packages, modifying services, etc), running the test code, and sending results to Autotest (and other places).

Convention holds that the test object file - and the object itself - should have the same name as the test. For example, the conflicts test contains a file named conflicts.py, which defines a conflicts class, as follows:

import autoqa.util
from autoqa.test import AutoQATest
from autoqa.decorators import ExceptionCatcher
from autotest_lib.client.bin import utils

class conflicts(AutoQATest):
    ...

The name of the class must match the name given in the run_test() line of the control file, and test classes must be subclasses of the AutoQATest class. But don't worry too much about how this works - the test_class.py.template contains the skeleton of an appropriate test object. Just change the name of the file (and class!) to something appropriate for your test.
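As an illustration of how the pieces fit together, here is a minimal, hypothetical test object for a test named mytest; it assumes the test ships a mytest.sh script in the test directory:

import os

from autoqa.test import AutoQATest
from autoqa.decorators import ExceptionCatcher
from autotest_lib.client.bin import utils

class mytest(AutoQATest):
    @ExceptionCatcher("self.run_once_failed")
    def run_once(self, **kwargs):
        # run the test script shipped in the test directory
        os.chdir(self.bindir)
        retval = utils.system('./mytest.sh', ignore_status=True)
        if retval == 0:
            self.result = 'PASSED'
        else:
            self.result = 'FAILED'
        self.summary = 'mytest finished with exit code %d' % retval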

AutoQATest base class

This class contains the functionality common to all the tests - i.e. it initializes the variables used for storing results in its __init__ function. The default initialize method then parses the config string passed in the control file into self.config, and prepares self.autotest_url - a URL pointing to the Autotest storage area where all the logs will be available once the test finishes.

It also contains a postprocess_iteration method, which uses self.result, self.summary, self.highlights and self.outputs to send a nicely formatted email to the autoqa-results mailing list.

Saving test results: When deciding how to record test results, please make sure to use the variables self.result, self.summary, self.highlights and self.outputs. These variables make the 'central dispatch' of test results possible, and will be used for future enhancements like ResultsDB.

The AutoQATest base class defines two additional methods - initialize_failed, and run_once_failed. These are used by the ExceptionCatcher decorator when an exception occurs in either the initialize or run_once methods.

ExceptionCatcher decorator

When an exception is raised during initialize or run_once, the test immediately ends without calling the postprocess_iteration method, which is supposed to send all the gathered data to the mailing list.

This behaviour is, of course, not what one would want, which is where the ExceptionCatcher decorator comes in. When an exception is raised, it calls the function passed as an argument to the decorator.

@ExceptionCatcher("self.run_once_failed")
def run_once(self, **kwargs):
    ...

I.e. if any unhandled exception is thrown during execution of run_once, the self.run_once_failed method is called. The self.run_once_failed method sets the result and summary variables (if unset), and calls postprocess_iteration (initialize_failed does the same). Once the run_once_failed method finishes, the exception is re-raised, and the test then ends.

**kwargs parameter: Because of some nasty Autotest magic, the decorated function must accept a **kwargs argument. The Autotest magic cannot work out the correct subset of arguments from **autoqa_args to pass, so it passes them all - which causes an error if they are not all listed.

Test stages

setup()

This is an optional method of the test class. This is where you make sure that any required packages are installed, services are started, your test code is compiled, and so on. For example:

    def setup(self):
        utils.system('yum -y install httpd')
        if utils.system('service httpd status') != 0:
            utils.system('service httpd start')


initialize()

This does any pre-test initialization that needs to happen. AutoQA tests typically use this method to parse the autoqa config data provided by the server or to create initial test result data structures. This is an optional method.

All basic initialization is done in the AutoQATest class, so check it out before you re-define it.

Call AutoQATest.initialize: If you re-implement the initialize function, make sure you call super(CLASSNAME, self).initialize(config) inside it, so all the required initialization is executed.
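A minimal sketch, assuming a test class named mytest and a hypothetical piece of test-specific state:

    @ExceptionCatcher("self.initialize_failed")
    def initialize(self, config, **kwargs):
        # run the common AutoQA initialization first
        super(mytest, self).initialize(config)
        # hypothetical test-specific state
        self.conflict_count = 0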

run_once()

This is where the test code actually gets run. It's the only required method for your test object.

In short, this method should build the argument list and run the test binary, like so:

    @ExceptionCatcher("self.run_once_failed")
    def run_once(self, baseurl, parents, reponame, **kwargs):
        os.chdir(self.bindir)
        cmd = "./sanity.py --scratchdir %s --logdir %s" % (self.tmpdir, self.resultsdir)
        cmd += " %s" % baseurl
        retval = utils.system(cmd, ignore_status=True)
        if retval != 0:
            raise error.TestFail

This will run the command, and store its exit code into the retval variable.

If you want to capture the output of the command, you can use the system_output function:

from autotest_lib.client.common_lib import error

    ...
    try:  
        output = utils.system_output(cmd, retain_output = True)
    except error.CmdError, e:
        output = e.result_obj.stdout
    ...

Or if you want both the exit code and the command output, try this:

    ...
    result = utils.run(cmd, ignore_status = True, stdout_tee = utils.TEE_TO_LOGS)
    output = result.stdout
    retval = result.exit_status
    ...

See the section on test object attributes for information about self.bindir, self.tmpdir, etc. Also see Getting test results for more information about getting results from your tests.

postprocess_iteration()

This method is implemented in the AutoQATest base class, and it sends the data gathered in self.result, self.summary, self.highlights and self.outputs to the autoqa-results mailing list.

You can of course reimplement this function if you want to (for instance) gather some extra data, or prepare the data gathered in the test before storing it, but please be sure to call AutoQATest.postprocess_iteration() afterwards. In general, you should not need to reimplement this function at all.

    def postprocess_iteration(self):
        keyval = dict()
        for line in self.outputs:
            if line.startswith('Max transfer speed: '):
                (dummy, max_speed) = line.split('speed: ')
                keyval['max_speed'] = max_speed
        self.write_test_keyval(keyval)

        super(CLASSNAME, self).postprocess_iteration()

(See Returning extra data for details about write_test_keyval.)

This method will be run after each iteration of run_once(), but note that it gets no arguments passed in. Any data you want from the test run needs to be saved into the test object - hence the use of self.outputs.

Useful test object attributes

Test objects have the following attributes available[1]:

outputdir       eg. results/<job>/<testname.tag>
resultsdir      eg. results/<job>/<testname.tag>/results
profdir         eg. results/<job>/<testname.tag>/profiling
debugdir        eg. results/<job>/<testname.tag>/debug
bindir          eg. tests/<test>
src             eg. tests/<test>/src
tmpdir          eg. tmp/<tempname>_<testname.tag>
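For instance, a run_once() method will typically change into self.bindir before invoking the test script, and write anything worth keeping into self.resultsdir; a hypothetical sketch (check.sh is a made-up helper):

        os.chdir(self.bindir)
        # keep the helper's log with the test results
        utils.system('./check.sh > %s/check.log 2>&1' % self.resultsdir,
                     ignore_status=True)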

Getting test results

The AutoQATest class provides a set of variables (self.result, self.summary, self.highlights and self.outputs) to be used for storing test results. The point of these is to have a single implementation of the results harness, in the AutoQATest class. At the moment the results are sent to the autoqa-results mailing list, but in the near future we'll be using database-based storage, which will give us a better way of reviewing the results. Proper usage of the above-mentioned variables is crucial for a seamless transition to this tool.

Overall result (self.result)

The overall test result is stored in the variable self.result. You should set it in run_once() according to the result of your test. You can choose from these values:

  • 'PASSED'
  • 'FAILED'
  • 'ABORTED'
  • 'CRASHED'
  • 'NEEDS_INSPECTION'

If no value is set in self.result, a value of 'NEEDS_INSPECTION' will be used during postprocess_iteration().

If an exception occurs, and is caught by the ExceptionCatcher decorator, self.result is set to 'CRASHED' (if not already set from inside the run_once() method).

When to note the result: It is recommended that you reserve setting self.result='PASSED' for the very end of the run_once() method, once you are sure everything really went OK. The other test results (such as 'FAILED', 'CRASHED', etc.) are best set at the moment you discover a problem.
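A sketch of that pattern inside run_once(), where cmd stands for whatever command your test runs:

        retval = utils.system(cmd, ignore_status=True)
        if retval != 0:
            # record the failure as soon as it is discovered
            self.result = 'FAILED'
            raise error.TestFail
        # everything really went OK - only now claim success
        self.result = 'PASSED'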

Summary

The self.summary is used as the subject line for the autoqa-results mailing list. It is meant to contain a short summary of the testrun - e.g. for the Conflicts test it could be "69 packages with file conflicts in rawhide-i386". So basically, it should be a short string describing the outcome of the test.

postprocess_iteration() then adds the name of the test and self.result, so the whole summary (as used for the autoqa-results mailing list) would be "Conflicts: FAILED;69 packages with file conflicts in rawhide-i386".
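For example, assuming the test counted its findings into a hypothetical conflict_count variable:

        self.summary = '%d packages with file conflicts in rawhide-i386' % conflict_count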

Highlights

The self.highlights should contain a digest of the stdout/stderr generated by your test; selecting the important warnings/errors is usually a good idea.

This digest will be at the beginning of the report in the autoqa-results mailing list.
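One possible way to build it, assuming self.highlights holds a list of lines (check the AutoQATest class for the exact convention) and output holds the command output captured in run_once():

        self.highlights = [line for line in output.splitlines()
                           if 'error' in line.lower() or 'warning' in line.lower()]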

Commands output

It is usually a good idea to log the stdout/stderr of the commands you run in your run_once(). You should store these in the self.outputs variable if you want them kept for further use.

Note: At the moment, all the logs (stderr/stdout...) are automagically harvested and stored by Autotest, so you don't really need to worry about it. It's just always a good idea to store them in self.outputs as well, as this variable's value will be stored in ResultsDB once it's up and running.
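A sketch, reusing the utils.run result object shown earlier and assuming self.outputs is a list of output lines (matching the postprocess_iteration() example above):

        result = utils.run(cmd, ignore_status=True, stdout_tee=utils.TEE_TO_LOGS)
        # keep the output around for postprocess_iteration() and ResultsDB
        self.outputs = result.stdout.splitlines()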

Log files and scratch data

Any files written to self.resultsdir will be saved at the end of the test. Anything written to self.tmpdir will be discarded.

Returning extra data

Further test-level info can be returned by using test.write_test_keyval(dict):

extrainfo = dict()
for line in self.results.stdout.splitlines():
    if line.startswith("kernel version "):
        extrainfo['kernelver'] = line.split()[3]
    ...
self.write_test_keyval(extrainfo)
  • For per-iteration data (performance numbers, etc) there are three methods:
    • Just attr: test.write_attr_keyval(attr_dict)
      • Test attributes are limited to 100 characters.[2]
    • Just perf: test.write_perf_keyval(perf_dict)
      • Performance values must be floating-point numbers.
    • Both: test.write_iteration_keyval(attr_dict, perf_dict)
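A small sketch of per-iteration reporting, with made-up keys and values:

        # attribute (string) data and performance (float) data for one iteration
        attr = {'test_mode': 'local'}
        perf = {'max_speed_mbps': 42.5}
        self.write_iteration_keyval(attr, perf)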

How to run AutoQA tests

Install AutoQA from GIT

First of all, you'll need to check out some version from Git. You can either use master or some tagged release.

To checkout master branch:

git clone git://git.fedorahosted.org/autoqa.git autoqa
cd autoqa

To check out a tagged release:

git clone git://git.fedorahosted.org/autoqa.git autoqa
cd autoqa
git tag -l
# now you'll get a list of tags; at the time of writing this document, the latest tag was v0.3.5-1
git checkout -b v0.3.5-1 tags/v0.3.5-1

Add your test

The best way to add your test into the directory structure is to create a new branch, copy your test in, and install autoqa with make install.

git checkout -b my_new_awesome_test
cp -r /path/to/directory/with/your/test ./tests
make clean install
Dependencies: It's possible that make install will fail due to missing Python modules (e.g. turbogears2); in that case, install them using yum.

Run your test

This depends on the hook your test is supposed to run under. Let's assume it is post-koji-build.

/usr/share/autoqa/post-koji-build/watch-koji-builds.py --dry-run

This command will show you current koji builds, e.g.:

No previous run - checking builds in the past 3 hours
autoqa post-koji-build --name espeak --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 espeak-1.42.04-1.fc12
autoqa post-koji-build --name kdemultimedia --kojitag dist-f11-updates-candidate --arch x86_64 kdemultimedia-4.3.4-1.fc11
autoqa post-koji-build --name kdeplasma-addons --kojitag dist-f11-updates-candidate --arch x86_64 kdeplasma-addons-4.3.4-1.fc11
autoqa post-koji-build --name cryptopp --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 cryptopp-5.6.1-0.1.svn479.fc12
autoqa post-koji-build --name drupal --kojitag dist-f12-updates-candidate --arch x86_64 drupal-6.15-1.fc12
autoqa post-koji-build --name seamonkey --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 seamonkey-2.0.1-1.fc12
... output trimmed ...

So to run your test, just select one of the lines and add the parameters --test name_of_your_test --local, which will execute the test you just wrote locally. If you wanted to run rpmlint, for example, the command would be:

autoqa post-koji-build --name espeak --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 espeak-1.42.04-1.fc12 --test rpmlint --local
--local: It is important to add the --local parameter. If you don't, the test will fail to run, since you don't have an Autotest server present.

References

Links

  • Autotest documentation
  • AddingTest (covers the basics of test writing): http://autotest.kernel.org/wiki/AddingTest
  • AutotestApi (more detailed info about writing tests): http://autotest.kernel.org/wiki/AutotestApi