From Fedora Project Wiki

Latest revision as of 09:10, 4 April 2011

This page is a draft only
It is still under construction and content may change. Do not rely on the information on this page.

Introduction

This guide presumes that you have written a new AutoQA test (preferably according to Writing AutoQA Tests) and want to verify that it works correctly. This article will show you how to do that.

Prerequisites

You must have AutoQA installed. You do not need the Autotest server unless you are running multi-host tests.

Don't trust the code... go virtual
Before validating the new test on your local system, you may want to confirm that the test does not perform destructive operations on the system and cannot fail in a way that would render your local system inoperable. Consider using Virtualization when verifying your test.

Examine the watcher

When you have the test ready, you have already chosen the right event for your test and configured it in the control.autoqa file. Now we need to simulate running the event's watcher on the AutoQA server to see which commands would be run. We can do that by adding --dry-run (use --help to see more useful options).

Let's say our test uses the post-koji-build event, which announces every package built and tagged with the dist-fX-updates-candidate tag in Koji. So we would run:

# /usr/share/autoqa/post-koji-build/watch-koji-builds.py --dry-run
No previous run - checking builds in the past 3 hours
autoqa post-koji-build --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 espeak-1.42.04-1.fc12
autoqa post-koji-build --kojitag dist-f11-updates-candidate --arch x86_64 kdemultimedia-4.3.4-1.fc11
autoqa post-koji-build --kojitag dist-f11-updates-candidate --arch x86_64 kdeplasma-addons-4.3.4-1.fc11
autoqa post-koji-build --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 cryptopp-5.6.1-0.1.svn479.fc12
autoqa post-koji-build --kojitag dist-f12-updates-candidate --arch x86_64 drupal-6.15-1.fc12
autoqa post-koji-build --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 seamonkey-2.0.1-1.fc12
... output trimmed ...

For every line, all tests of the post-koji-build event (specified in the testlist file) would be run on all the architectures specified by the --arch option. For our purposes we will pick one command, let's say the first one.
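
The watcher's dry-run behavior can be sketched in a few lines of Python. This is a simplified illustration only, with hypothetical function names; it is not the actual watch-koji-builds.py code:

```python
# Simplified sketch of a dry-run watcher loop: for each new build found
# in Koji, compose (and in dry-run mode only print) the corresponding
# `autoqa` command. Not the real watch-koji-builds.py.

def compose_command(event, kojitag, arches, nvr):
    """Build the autoqa command line for one Koji build."""
    arch_opts = " ".join("--arch %s" % a for a in arches)
    return "autoqa %s --kojitag %s %s %s" % (event, kojitag, arch_opts, nvr)

def dry_run(builds):
    """Print the commands that would be executed, without running them."""
    for build in builds:
        print(compose_command("post-koji-build", build["tag"],
                              build["arches"], build["nvr"]))

# One build record, with values taken from the first output line above:
dry_run([{"tag": "dist-f12-updates-candidate",
          "arches": ["x86_64", "i686"],
          "nvr": "espeak-1.42.04-1.fc12"}])
# prints: autoqa post-koji-build --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 espeak-1.42.04-1.fc12
```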

Examine the control file

We will now see what would happen if the chosen command were actually run. By appending the --dry-run option to the command, the autoqa harness prepares everything needed for the autotest harness and prints what would be run, but does not execute it. Let's see what happens:

# autoqa post-koji-build --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 espeak-1.42.04-1.fc12 --dry-run
/usr/bin/atest job create --reboot_before=never --reboot_after=never -m *x86_64 -f /tmp/autoqa-control.HCkOS6 post-koji-build:rpmguard.noarch
keeping /tmp/autoqa-control.HCkOS6 at user request
/usr/bin/atest job create --reboot_before=never --reboot_after=never -m *x86_64 -f /tmp/autoqa-control.tvUgpL post-koji-build:rpmlint.noarch
keeping /tmp/autoqa-control.tvUgpL at user request

There are two lines showing that autotest would be run with a particular control file. There are two of them because two tests would be executed. Those control files were kept on disk for our examination. Pick one of them and display it. You should see something like this:

# -*- coding: utf-8 -*-

autoqa_conf = '''
... output trimmed ...
'''

autoqa_args = {'arch': 'x86_64', 'kojitag': 'dist-f12-updates-candidate', 'event': 'post-koji-build', 'name': 'espeak', 'nvr': 'espeak-1.42.04-1.fc12'}

... output trimmed ...

job.run_test('rpmlint', config=autoqa_conf, **autoqa_args)

It is almost the same config file that you created, but some more data is added at the top. The autoqa_conf string is your configuration file from /etc/autoqa.conf. After that there are some other properties in the autoqa_args dictionary (event, kojitag, nvr and name in this case) that were set by the event according to the command line. At the end you finally see how your test object will be invoked.

You now have the final control file, so you can easily check whether all the arguments of the job.run_test method are set correctly and whether your test will be executed correctly.
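
The last line of the control file uses Python's ** operator, which expands the autoqa_args dictionary into keyword arguments. Here is a minimal sketch of that mechanism; run_test below is a stand-in for the real job.run_test, whose exact signature may differ:

```python
# The generated control file passes autoqa_args with **, so each
# dictionary key becomes a named parameter of the test method.

autoqa_args = {'arch': 'x86_64', 'kojitag': 'dist-f12-updates-candidate',
               'event': 'post-koji-build', 'name': 'espeak',
               'nvr': 'espeak-1.42.04-1.fc12'}

def run_test(testname, config=None, **kwargs):
    """Stand-in for job.run_test(); just reports what it received."""
    return testname, sorted(kwargs)

print(run_test('rpmlint', config='...', **autoqa_args))
# prints: ('rpmlint', ['arch', 'event', 'kojitag', 'name', 'nvr'])
```

In other words, the test object receives arch, event, kojitag, name and nvr as ordinary keyword arguments.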

If everything looks fine, we can move on to actually running the test.

Run just your test

Now we will run our test for real. But we don't want to run all the tests of the post-koji-build event, just ours. Suppose we are writing a test named rpmlint (which, by the way, already exists in AutoQA). We will modify the command to look like this:

# autoqa post-koji-build --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 espeak-1.42.04-1.fc12 --test rpmlint

If you don't have the autotest server installed and configured, you will also need to append the --local option, or set local = true in /etc/autoqa.conf, to run the test on the local computer.
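
For the config-file variant, the local = true switch can be read with a standard parser, assuming /etc/autoqa.conf uses INI syntax. A small sketch with an inline sample; the [general] section name is a hypothetical stand-in, so check your actual autoqa.conf for the real section name:

```python
# Sketch: reading a boolean "local" switch from an INI-style config.
import configparser

sample = """
[general]
local = true
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)
print(cfg.getboolean("general", "local"))  # prints: True
```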

Let's see the output:

# autoqa post-koji-build --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 espeak-1.42.04-1.fc12 --test rpmlint --local
16:38:32 INFO | Writing results to /usr/share/autotest/client/results/post-koji-build:rpmlint.noarch
... output trimmed ...
16:38:47 INFO | Test started. Number of iterations: 1
16:38:47 INFO | Executing iteration 1 of 1
16:38:47 INFO | Dropping caches between iterations
16:38:47 DEBUG| Running 'sync'
16:38:48 DEBUG| Running 'echo 3 > /proc/sys/vm/drop_caches'
16:38:48 INFO | ========================================
16:38:48 INFO | espeak-1.42.04-1.fc12
16:38:48 INFO | ========================================
16:38:48 INFO | Removing all RPMs from /usr/share/autotest/client/tmp/tmpRvqHlz_rpmlint/rpms
16:38:49 INFO | Saving RPMs to /usr/share/autotest/client/tmp/tmpRvqHlz_rpmlint/rpms
16:38:49 INFO | Grabbing http://koji.fedoraproject.org/packages/espeak/1.42.04/1.fc12/i686/espeak-devel-1.42.04-1.fc12.i686.rpm
16:38:51 INFO | Grabbing http://koji.fedoraproject.org/packages/espeak/1.42.04/1.fc12/i686/espeak-1.42.04-1.fc12.i686.rpm
... output trimmed ...
16:39:04 INFO | Grabbing http://koji.fedoraproject.org/packages/espeak/1.42.04/1.fc12/src/espeak-1.42.04-1.fc12.src.rpm
16:39:06 DEBUG| Running 'rpmlint /usr/share/autotest/client/tmp/tmpRvqHlz_rpmlint/rpms 2>&1'
16:39:08 DEBUG| [stdout] espeak.ppc64: W: spelling-error %description -l en_US stdin -> stein, stain, stdio
16:39:09 DEBUG| [stdout] espeak.ppc64: W: shared-lib-calls-exit /usr/lib64/libespeak.so.1.1.42 exit@GLIBC_2.3
16:39:10 DEBUG| [stdout] espeak.ppc: W: spelling-error %description -l en_US stdin -> stein, stain, stdio
16:39:10 DEBUG| [stdout] espeak.ppc: W: shared-lib-calls-exit /usr/lib/libespeak.so.1.1.42 exit@GLIBC_2.0
16:39:11 DEBUG| [stdout] espeak-devel.ppc: W: no-documentation
16:39:11 DEBUG| [stdout] espeak.i686: W: spelling-error %description -l en_US stdin -> stein, stain, stdio
16:39:12 DEBUG| [stdout] espeak.i686: W: shared-lib-calls-exit /usr/lib/libespeak.so.1.1.42 exit@GLIBC_2.0
16:39:13 DEBUG| [stdout] espeak-devel.ppc64: W: no-documentation
16:39:13 DEBUG| [stdout] espeak-devel.x86_64: W: no-documentation
16:39:14 DEBUG| [stdout] espeak-devel.i686: W: no-documentation
16:39:14 DEBUG| [stdout] espeak.src: W: spelling-error %description -l en_US stdin -> stein, stain, stdio
16:39:17 DEBUG| [stdout] espeak.src:48: W: macro-in-comment %patch2
16:39:17 DEBUG| [stdout] espeak.src:70: W: deprecated-grep [u'egrep']
16:39:19 DEBUG| [stdout] espeak.x86_64: W: spelling-error %description -l en_US stdin -> stein, stain, stdio
16:39:20 DEBUG| [stdout] espeak.x86_64: W: shared-lib-calls-exit /usr/lib64/libespeak.so.1.1.42 exit@GLIBC_2.2.5
16:39:20 DEBUG| [stdout] 9 packages and 0 specfiles checked; 0 errors, 15 warnings.
16:39:20 INFO | ****************************************
16:39:20 INFO | * RESULT: INFO
16:39:20 INFO | * SUMMARY: rpmlint: INFO; 0 errors, 15 warnings for espeak-1.42.04-1.fc12
16:39:20 INFO | * HIGHLIGHTS: 0 lines
16:39:20 INFO | * OUTPUTS: 19 lines
16:39:20 INFO | * EMAIL RECIPIENTS: 
16:39:20 INFO | ****************************************
16:39:20 INFO | Test finished after 1 iterations.
... output trimmed ...
16:39:21 INFO | END GOOD	----	----	timestamp=1299512361	localtime=Mar 07 16:39:21	
... output trimmed ...

You can see that the test went well, and you can see rpmlint's output there. You can also find all the output logged in /usr/share/autotest/client/results/post-koji-build:rpmlint.noarch (in this case). The most important results, the ones you stored in self.results in the test object, are available in the same directory as rpmlint/results/output.log (in this case).
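
How self.results ends up in results/output.log can be pictured with a small stand-in. The class and file layout here are hypothetical simplifications, not the real AutoQA test class:

```python
# Sketch: a test object accumulates its results in self.results and the
# harness writes them next to the other logs under results/output.log.
import os
import tempfile

class FakeTest:
    def __init__(self):
        self.results = ""

    def run_once(self):
        # A test would append its findings here as it runs.
        self.results += "0 errors, 15 warnings for espeak-1.42.04-1.fc12\n"

    def save(self, resultsdir):
        """Write accumulated results to results/output.log and return the path."""
        os.makedirs(os.path.join(resultsdir, "results"), exist_ok=True)
        path = os.path.join(resultsdir, "results", "output.log")
        with open(path, "w") as f:
            f.write(self.results)
        return path

t = FakeTest()
t.run_once()
print(open(t.save(tempfile.mkdtemp())).read().strip())
# prints: 0 errors, 15 warnings for espeak-1.42.04-1.fc12
```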

If there was a problem in your test, the exception in the output should guide you to the source of the problem.

Test thoroughly

Now that you have verified that your test works under one event (e.g. a new package build), you should verify it with a few more events. Just go through the list of commands the watcher gave you and try one command after another. Does everything still work? Then your test may be ready for publishing in AutoQA upstream, congratulations :)

Remarks

Init scripts

When you execute your test using autotest, it adds a few init scripts:

16:38:32 DEBUG| Running 'ln -sf /usr/share/autotest/client/tools/autotest /etc/init.d/autotest'
16:38:32 DEBUG| Running 'ln -sf /usr/share/autotest/client/tools/autotest /etc/rc3.d/S99autotest'

You might be interested in this particularly when testing on bare metal. You don't have to be concerned, though. The purpose of this script is to continue execution of a previously stopped test, e.g. when a test requires a reboot. In that case a control.state file exists and autotest will continue with the test execution. In all other cases (the majority), this script simply does nothing.
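
The resume decision described above can be sketched as follows. This is a simplified illustration; the real logic lives in /usr/share/autotest/client/tools/autotest:

```python
# Sketch: the init script resumes a test run only if an interrupted run
# left a control.state file behind in the client directory.
import os
import tempfile

def should_resume(client_dir):
    """True if a previous run was interrupted (e.g. by a reboot the test
    itself requested) and should be continued."""
    return os.path.exists(os.path.join(client_dir, "control.state"))

# Demonstration in a scratch directory: no control.state means nothing
# to resume; creating one flips the answer.
scratch = tempfile.mkdtemp()
print(should_resume(scratch))          # prints: False
open(os.path.join(scratch, "control.state"), "w").close()
print(should_resume(scratch))          # prints: True
```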