From Fedora Project Wiki


This page is a draft only
It is still under construction and content may change. Do not rely on the information on this page.


This guide presumes that you have written a new AutoQA test (preferably following Writing AutoQA Tests) and want to verify that it works correctly. This article shows you how to do that.


You must have AutoQA installed. You do not need an Autotest server unless you run multi-host tests.

Don't trust the code... go virtual
Before validating the new test on your local system, you may want to confirm that the test does not perform destructive operations on the system and cannot fail in a way that would render your local system inoperable. Consider using Virtualization when verifying your test.

Examine the watcher

When you have the test ready, you have already chosen the right event for your test and configured it in the control.autoqa file. Now we need to simulate running the event's watcher on the AutoQA server to see what commands would be run. We can do that by adding the --dry-run option (use --help to see other useful options).

Let's say our test uses the post-koji-build event, which announces every package built and tagged with a dist-fX-updates-candidate tag in Koji. So we would run:

# /usr/share/autoqa/post-koji-build/ --dry-run
No previous run - checking builds in the past 3 hours
autoqa post-koji-build --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 espeak-1.42.04-1.fc12
autoqa post-koji-build --kojitag dist-f11-updates-candidate --arch x86_64 kdemultimedia-4.3.4-1.fc11
autoqa post-koji-build --kojitag dist-f11-updates-candidate --arch x86_64 kdeplasma-addons-4.3.4-1.fc11
autoqa post-koji-build --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 cryptopp-5.6.1-0.1.svn479.fc12
autoqa post-koji-build --kojitag dist-f12-updates-candidate --arch x86_64 drupal-6.15-1.fc12
autoqa post-koji-build --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 seamonkey-2.0.1-1.fc12
... output trimmed ...

For every line, all tests of the post-koji-build event (specified in the testlist file) would be run on all architectures given by the --arch options. For our purposes we will pick one command, say the first one.
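Schematically, the fan-out described above looks as follows. This is an illustrative Python sketch, not AutoQA code; the test names, architectures and package NVR are taken from the example output above.

```python
# Illustrative sketch (not actual AutoQA code): how one watcher line
# fans out into one job per test per architecture.

def expand_command(tests, arches, nvr):
    """Return one (test, arch) job for every combination."""
    return [(test, arch) for test in tests for arch in arches]

jobs = expand_command(
    tests=["rpmguard", "rpmlint"],   # from the event's testlist file
    arches=["x86_64", "i686"],       # from the --arch options
    nvr="espeak-1.42.04-1.fc12",     # the build announced by the watcher
)
print(jobs)   # four jobs: each test on each architecture
```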

Examine the control file

We will now see what would happen if the chosen command were actually run. With the --dry-run option appended to the command, the autoqa harness prepares everything needed for the autotest harness and prints what would be run, but does not execute it. Let's see what happens:

/usr/bin/atest job create --reboot_before=never --reboot_after=never -m *x86_64 -f /tmp/autoqa-control.HCkOS6 post-koji-build:rpmguard.noarch
keeping /tmp/autoqa-control.HCkOS6 at user request
/usr/bin/atest job create --reboot_before=never --reboot_after=never -m *x86_64 -f /tmp/autoqa-control.tvUgpL post-koji-build:rpmlint.noarch
keeping /tmp/autoqa-control.tvUgpL at user request

There are two lines saying that autotest would be run with a particular control file; there are two because two tests would be executed. The control files were kept on disk for our examination. Pick one of them and display it. You should see something like this:

# -*- coding: utf-8 -*-

autoqa_conf = '''
... output trimmed ...

autoqa_args = {'arch': 'x86_64', 'kojitag': 'dist-f12-updates-candidate', 'event': 'post-koji-build', 'name': 'espeak', 'nvr': 'espeak-1.42.04-1.fc12'}

... output trimmed ...

job.run_test('rpmlint', config=autoqa_conf, **autoqa_args)

It's almost the same control file that you created, but some more data are added at the top. The autoqa_conf variable holds your configuration file from /etc/autoqa.conf. Below that, the autoqa_args dictionary contains further properties (event, kojitag, nvr and name in this case) that were set by the event according to the command line. At the end you can finally see how your test object will be invoked.

You now have the final control file, so you can easily check whether all the arguments of the job.run_test method are set correctly and whether your test will be executed as intended.
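To illustrate the **autoqa_args unpacking seen in the control file, here is a minimal Python sketch. The run_test function below is a stand-in defined here only to show how each dictionary key becomes a keyword argument of your test; it is not the real autotest job.run_test API.

```python
# Stand-in for autotest's job.run_test, purely for illustration of the
# **kwargs unpacking used in the generated control file.
def run_test(test_name, config=None, **kwargs):
    # Each key of autoqa_args arrives as a keyword argument of the test.
    return test_name, config, kwargs

autoqa_conf = "...contents of /etc/autoqa.conf..."
autoqa_args = {'arch': 'x86_64', 'kojitag': 'dist-f12-updates-candidate',
               'event': 'post-koji-build', 'name': 'espeak',
               'nvr': 'espeak-1.42.04-1.fc12'}

name, config, kwargs = run_test('rpmlint', config=autoqa_conf, **autoqa_args)
print(kwargs['nvr'])   # the test sees nvr='espeak-1.42.04-1.fc12'
```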

If everything looks fine, we can move on to actually running the test.

Run just your test

Now we will run our test for real. But we don't want to run all tests of the post-koji-build event, just our single one. Suppose we are writing a test named rpmlint (already present in AutoQA, by the way). We modify the command to look like this:

# autoqa post-koji-build --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 espeak-1.42.04-1.fc12 --test rpmlint

If you don't have autotest-server installed and configured, you will also need to append the --local option or set local = true in /etc/autoqa.conf to run the test on the local computer.
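The decision just described can be sketched as follows. This is an illustrative simplification, not AutoQA code; the [general] section name is an assumption, only the local option name and the --local flag come from the article.

```python
# Illustrative sketch: run locally if --local was given on the command
# line OR local = true is set in the configuration file.
from configparser import ConfigParser

def run_locally(cli_local_flag, conf_text):
    conf = ConfigParser()
    conf.read_string(conf_text)
    # '[general]' is a hypothetical section name for this sketch.
    return cli_local_flag or conf.getboolean('general', 'local',
                                             fallback=False)

conf_text = "[general]\nlocal = true\n"
print(run_locally(False, conf_text))     # True: the config enables it
print(run_locally(True, "[general]\n"))  # True: the flag wins regardless
```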

Let's see the output:

# autoqa post-koji-build --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 espeak-1.42.04-1.fc12 --test rpmlint --local
16:38:32 INFO | Writing results to /usr/share/autotest/client/results/post-koji-build:rpmlint.noarch
... output trimmed ...
16:38:47 INFO | Test started. Number of iterations: 1
16:38:47 INFO | Executing iteration 1 of 1
16:38:47 INFO | Dropping caches between iterations
16:38:47 DEBUG| Running 'sync'
16:38:48 DEBUG| Running 'echo 3 > /proc/sys/vm/drop_caches'
16:38:48 INFO | ========================================
16:38:48 INFO | espeak-1.42.04-1.fc12
16:38:48 INFO | ========================================
16:38:48 INFO | Removing all RPMs from /usr/share/autotest/client/tmp/tmpRvqHlz_rpmlint/rpms
16:38:49 INFO | Saving RPMs to /usr/share/autotest/client/tmp/tmpRvqHlz_rpmlint/rpms
16:38:49 INFO | Grabbing
16:38:51 INFO | Grabbing
... output trimmed ...
16:39:04 INFO | Grabbing
16:39:06 DEBUG| Running 'rpmlint /usr/share/autotest/client/tmp/tmpRvqHlz_rpmlint/rpms 2>&1'
16:39:08 DEBUG| [stdout] espeak.ppc64: W: spelling-error %description -l en_US stdin -> stein, stain, stdio
16:39:09 DEBUG| [stdout] espeak.ppc64: W: shared-lib-calls-exit /usr/lib64/ exit@GLIBC_2.3
16:39:10 DEBUG| [stdout] espeak.ppc: W: spelling-error %description -l en_US stdin -> stein, stain, stdio
16:39:10 DEBUG| [stdout] espeak.ppc: W: shared-lib-calls-exit /usr/lib/ exit@GLIBC_2.0
16:39:11 DEBUG| [stdout] espeak-devel.ppc: W: no-documentation
16:39:11 DEBUG| [stdout] espeak.i686: W: spelling-error %description -l en_US stdin -> stein, stain, stdio
16:39:12 DEBUG| [stdout] espeak.i686: W: shared-lib-calls-exit /usr/lib/ exit@GLIBC_2.0
16:39:13 DEBUG| [stdout] espeak-devel.ppc64: W: no-documentation
16:39:13 DEBUG| [stdout] espeak-devel.x86_64: W: no-documentation
16:39:14 DEBUG| [stdout] espeak-devel.i686: W: no-documentation
16:39:14 DEBUG| [stdout] espeak.src: W: spelling-error %description -l en_US stdin -> stein, stain, stdio
16:39:17 DEBUG| [stdout] espeak.src:48: W: macro-in-comment %patch2
16:39:17 DEBUG| [stdout] espeak.src:70: W: deprecated-grep [u'egrep']
16:39:19 DEBUG| [stdout] espeak.x86_64: W: spelling-error %description -l en_US stdin -> stein, stain, stdio
16:39:20 DEBUG| [stdout] espeak.x86_64: W: shared-lib-calls-exit /usr/lib64/ exit@GLIBC_2.2.5
16:39:20 DEBUG| [stdout] 9 packages and 0 specfiles checked; 0 errors, 15 warnings.
16:39:20 INFO | ****************************************
16:39:20 INFO | * RESULT: INFO
16:39:20 INFO | * SUMMARY: rpmlint: INFO; 0 errors, 15 warnings for espeak-1.42.04-1.fc12
16:39:20 INFO | * HIGHLIGHTS: 0 lines
16:39:20 INFO | * OUTPUTS: 19 lines
16:39:20 INFO | ****************************************
16:39:20 INFO | Test finished after 1 iterations.
... output trimmed ...
16:39:21 INFO | END GOOD	----	----	timestamp=1299512361	localtime=Mar 07 16:39:21	
... output trimmed ...

You can see that the test went well, and rpmlint's output is included. You can also find all the output logged under /usr/share/autotest/client/results/post-koji-build:rpmlint.noarch (in this case). The most important results, which you stored in self.results in the test object, are available in the same directory as rpmlint/results/output.log (in this case).

If there was a problem in your test, the exception traceback in the output should guide you to the source of the problem.

Test thoroughly

Now that you have verified that your test works under one event (e.g. a newly built package), you should verify it with a few more. Just go through the list of commands the watcher gave you and try them one after another. Does everything still work? Then your test may be ready for publishing in AutoQA upstream, congratulations :)


Init scripts

When you execute your test using autotest, it adds a few init scripts:

16:38:32 DEBUG| Running 'ln -sf /usr/share/autotest/client/tools/autotest /etc/init.d/autotest'
16:38:32 DEBUG| Running 'ln -sf /usr/share/autotest/client/tools/autotest /etc/rc3.d/S99autotest'

You might be interested in this information particularly when testing on bare metal, but you don't have to be concerned. The purpose of this script is to resume execution of a previously stopped test, e.g. when a test requires a computer reboot. In that case a control.state file exists and autotest continues with test execution. In the other (majority of) cases, this script simply does nothing.
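The resume decision boils down to a file-existence check. The following Python sketch illustrates the idea (the real script shipped with autotest is a shell init script, and the should_resume helper here is hypothetical).

```python
# Hypothetical sketch of the init script's decision: resume only when a
# control.state file from an interrupted test run exists.
import os

def should_resume(state_dir):
    """Resume a previously interrupted test only if control.state exists."""
    return os.path.exists(os.path.join(state_dir, 'control.state'))

# With no control.state present, the script does nothing.
print(should_resume('/tmp/some-empty-or-missing-dir'))
```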