Verifying AutoQA tests



This page is a draft only
It is still under construction and content may change. Do not rely on the information on this page.


Introduction

This guide presumes that you have written a new AutoQA test (preferably according to Writing AutoQA Tests) and want to verify that it works correctly. This article will show you how to do that.

Prerequisites

First, install the autotest-client and autoqa packages. These provide a local test harness and the autoqa hooks, watchers and Python libraries needed to verify that a test is functioning properly.
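For example, on a Fedora system (assuming both packages are available in your enabled repositories):

$ su -c 'yum install autotest-client autoqa'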

Light fuse, get away...
Before validating the new test on your local system, you may want to confirm that the test does not perform destructive operations on the system and cannot fail in a way that would render your local system inoperable. Consider using Virtualization when verifying your test.

In place of autotest-client, you may choose to install and configure an autotest server. Setting up an autotest server is a more involved process and is only needed if your tests require coordination between multiple test systems.

Examine the watcher

Once your test is ready, you have already chosen the right hook (that is, the event type) for it and added your test's name to that hook's testlist file. Now we need to simulate running the hook's watcher, as the AutoQA server does, to see what commands would be run. We can do that by adding the --dry-run option (use --help to see more useful options).

Let's say our test uses the post-koji-build hook, which announces every package built and tagged with a dist-fX-updates-candidate tag in Koji. We would therefore run:

$ /usr/share/autoqa/post-koji-build/watch-koji-builds.py --dry-run
No previous run - checking builds in the past 3 hours
autoqa post-koji-build --name espeak --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 espeak-1.42.04-1.fc12
autoqa post-koji-build --name kdemultimedia --kojitag dist-f11-updates-candidate --arch x86_64 kdemultimedia-4.3.4-1.fc11
autoqa post-koji-build --name kdeplasma-addons --kojitag dist-f11-updates-candidate --arch x86_64 kdeplasma-addons-4.3.4-1.fc11
autoqa post-koji-build --name cryptopp --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 cryptopp-5.6.1-0.1.svn479.fc12
autoqa post-koji-build --name drupal --kojitag dist-f12-updates-candidate --arch x86_64 drupal-6.15-1.fc12
autoqa post-koji-build --name seamonkey --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 seamonkey-2.0.1-1.fc12
### output trimmed ###

For every line of this output, all tests of the post-koji-build hook (those specified in its testlist file) would be run on all the architectures given by the --arch options. For our purposes we will pick one command, say the first one.

Examine the control file

We will now see what would happen if the chosen command were actually run. With the --dry-run option appended to the command, the autoqa harness prepares everything needed for the autotest harness and prints what would be run, but does not execute it. Let's see what happens:

$ autoqa post-koji-build --name espeak --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 espeak-1.42.04-1.fc12 --dry-run
/usr/share/autotest/client/bin/autotest --verbose -t post-koji-build:rpmlint.x86_64 /tmp/autoqa-control.y_ODy7
keeping /tmp/autoqa-control.y_ODy7 at user request
/usr/share/autotest/client/bin/autotest --verbose -t post-koji-build:rpmguard.x86_64 /tmp/autoqa-control.rjzqJ_
keeping /tmp/autoqa-control.rjzqJ_ at user request

There are two lines, each saying that autotest would be run with a particular control file; there are two because we asked for testing on two different architectures. Those control files were kept on disk for our examination. Pick one of them and display it. You should see something like this:

# -*- coding: utf-8 -*-

autoqa_conf = '''
### output trimmed ###
'''

kojitag='dist-f12-updates-candidate'
nvr='espeak-1.42.04-1.fc12'
name='espeak'

### output trimmed ###

job.run_test('rpmlint', name=name, nvr=nvr, kojitag=kojitag, config=autoqa_conf)

It is almost the same control file that you created, but some more data have been added at the top. The autoqa_conf string contains your configuration file from /etc/autoqa.conf. After that come some further variables (kojitag, nvr and name in this case) that were set by the hook according to the command line. At the end you finally see how your test object will be invoked.

You now have the final control file, so you can easily check whether all the arguments of the job.run_test method are set correctly and whether your test will be executed as intended.
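As a cross-check, the keyword arguments passed to job.run_test must match the signature of your test object's run_once method. Here is a minimal sketch of that correspondence, assuming your test follows the usual autotest client conventions (the import path and method body are illustrative only, not the real rpmlint test; see Writing AutoQA Tests for the actual template):

# rpmlint.py -- illustrative sketch only
from autotest_lib.client.bin import test

class rpmlint(test.test):
    version = 1

    def run_once(self, name=None, nvr=None, kojitag=None, config=None):
        # These keyword arguments must match the job.run_test() call in
        # the control file above; 'config' receives the contents of
        # /etc/autoqa.conf passed in as the autoqa_conf string.
        self.results = 'rpmlint output would be collected here'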

If everything looks fine, we can move on to actually running the test.

Run just your test

Now we will run our test for real. But we don't want to run all the tests of the post-koji-build hook, just our own. Suppose we are writing a test named rpmlint (which, by the way, is already present in AutoQA). We will modify the command to look like this:

autoqa post-koji-build --name espeak --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 espeak-1.42.04-1.fc12 --test rpmlint

If you don't have an autotest server installed and configured, you will also need to append the --local option or set local = true in /etc/autoqa.conf to run the test on the local computer.
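If you prefer the configuration file, the relevant line in /etc/autoqa.conf is just the following (shown here without its surrounding section header, which may differ between versions; check the shipped file):

local = true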

Let's see the output:

$ autoqa post-koji-build --name espeak --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 espeak-1.42.04-1.fc12 --test rpmlint --local
03:56:50 INFO | Writing results to /usr/share/autotest/client/results/post-koji-build:rpmlint.x86_64
### output trimmed ###
03:57:05 INFO | Test started. Number of iterations: 1
03:57:05 INFO | Executing iteration 1 of 1
03:57:07 INFO | Saving RPMs to /usr/share/autotest/client/tmp/tmpF8nOnN_rpmlint/rpms
03:57:07 INFO | Grabbing http://koji.fedoraproject.org/packages/espeak/1.42.04/1.fc12/i686/espeak-devel-1.42.04-1.fc12.i686.rpm
03:57:08 INFO | Grabbing http://koji.fedoraproject.org/packages/espeak/1.42.04/1.fc12/i686/espeak-1.42.04-1.fc12.i686.rpm
03:57:16 INFO | Grabbing http://koji.fedoraproject.org/packages/espeak/1.42.04/1.fc12/ppc64/espeak-devel-1.42.04-1.fc12.ppc64.rpm
03:57:17 INFO | Grabbing http://koji.fedoraproject.org/packages/espeak/1.42.04/1.fc12/ppc64/espeak-1.42.04-1.fc12.ppc64.rpm
### output trimmed ###
03:57:30 INFO | Grabbing http://koji.fedoraproject.org/packages/espeak/1.42.04/1.fc12/src/espeak-1.42.04-1.fc12.src.rpm
03:57:45 DEBUG| Running 'rpmlint /usr/share/autotest/client/tmp/tmpF8nOnN_rpmlint/rpms 2>&1'
03:57:46 DEBUG| espeak.ppc: I: enchant-dictionary-not-found en_US
03:57:46 DEBUG| espeak.ppc: W: shared-lib-calls-exit /usr/lib/libespeak.so.1.1.42 exit@GLIBC_2.0
03:57:46 DEBUG| espeak-devel.i686: W: no-documentation
03:57:46 DEBUG| espeak-devel.x86_64: W: no-documentation
03:57:47 DEBUG| espeak.x86_64: W: shared-lib-calls-exit /usr/lib64/libespeak.so.1.1.42 exit@GLIBC_2.2.5
03:57:47 DEBUG| espeak-devel.ppc64: W: no-documentation
03:57:47 DEBUG| espeak.i686: W: shared-lib-calls-exit /usr/lib/libespeak.so.1.1.42 exit@GLIBC_2.0
03:57:47 DEBUG| espeak.ppc64: W: shared-lib-calls-exit /usr/lib64/libespeak.so.1.1.42 exit@GLIBC_2.3
03:57:47 DEBUG| espeak-devel.ppc: W: no-documentation
03:57:47 DEBUG| 9 packages and 0 specfiles checked; 0 errors, 8 warnings.
03:57:47 INFO | Test finished after 1 iterations.
### output trimmed ###
03:57:49 INFO | END GOOD	----	----	timestamp=1261126669	localtime=Dec 18 03:57:49	
### output trimmed ###

You can see that the test went well, and rpmlint's output is right there. You can also find all the output logged in /usr/share/autotest/client/results/post-koji-build:rpmlint.x86_64 (in this case). The most important results, the ones you have written to self.results in the test object, are available in the same directory as rpmlint/results/rpmlint.log (in this case).
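To inspect those main results directly, using the paths from this example run:

$ less /usr/share/autotest/client/results/post-koji-build:rpmlint.x86_64/rpmlint/results/rpmlint.log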

If there was a problem in your test, the exception in the output should guide you to the source of the problem.

Test thoroughly

Now that you have verified that your test works correctly for one event (e.g. one newly built package), you should verify it for some more. Just go through the list of commands the watcher gave you and try one command after another. Does everything still work? Then your test may be ready for publishing in AutoQA upstream, congratulations :)
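A small shell loop can replay them all for you; this is a sketch that assumes the dry-run output format shown earlier and selects just our test with --test:

$ /usr/share/autoqa/post-koji-build/watch-koji-builds.py --dry-run | grep '^autoqa ' | \
  while read -r cmd; do $cmd --test rpmlint --local; done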

Remarks

Init scripts

When you execute your test using autotest-client, it installs an init script (as a pair of symlinks):

 10:57:37 DEBUG| Running 'ln -sf /usr/share/autotest/client/tools/autotest /etc/init.d/autotest'
 10:57:37 DEBUG| Running 'ln -sf /usr/share/autotest/client/tools/autotest /etc/rc5.d/S99autotest'

You might be interested in this particularly when testing on bare metal, but you don't have to be concerned. The purpose of this script is to continue the execution of a previously interrupted test, e.g. when a test requires a computer reboot. In that case a control.state file exists and autotest will continue with the test execution. In the other (and majority of) cases, this script simply does nothing.
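If you want to tidy up your workstation after testing, removing the two symlinks shown above undoes this (an optional cleanup; do not remove them while an interrupted test is still waiting to resume):

$ rm /etc/init.d/autotest /etc/rc5.d/S99autotest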

TODOs

  • Under which user should autotest and autoqa be run?