From Fedora Project Wiki

(Integrated into Writing_AutoQA_Tests)
 
== Will's notes (to be integrated into main page) ==
=== Control Files ===
The control file is actually interpreted as a Python script, so you can use any normal Python constructs in it.
Before it reads the control file, Autotest imports all the symbols from the <code>autotest_lib.client.bin.util</code> module.<ref>http://autotest.kernel.org/browser/branches/0.10.1/client/bin/job.py#L19</ref> This means the control files can use any function defined in <code>common_lib.utils</code> or <code>bin.base_utils</code><ref>http://autotest.kernel.org/browser/branches/0.10.1/client/bin/utils.py</ref>. This lets you do things like:
<pre>arch = get_arch()
baseurl = '%s/development/%s/os/' % (mirror_baseurl, arch)
job.run_test('some_rawhide_test', arch=arch, baseurl=baseurl)</pre>
since <code>get_arch</code> is defined in <code>common_lib.utils</code>.
=== Test Objects: Getting test results ===
First, the basic rule for test results: if your <code>run_once()</code> method does not raise an exception, the test result is PASS. If it raises <code>error.TestFail</code> or <code>error.TestWarn</code>, the result is FAIL or WARN, respectively. Any other exception yields an ERROR result.
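To illustrate the rule, here is a plain-Python sketch (not autotest code) of how a harness could translate <code>run_once()</code> exceptions into results. <code>TestFail</code> and <code>TestWarn</code> stand in for <code>error.TestFail</code> and <code>error.TestWarn</code>, and <code>run_and_grade</code> is a made-up helper, not part of the autotest API:

```python
# Illustrative stand-ins for autotest's error.TestFail / error.TestWarn.
class TestFail(Exception):
    pass

class TestWarn(Exception):
    pass

def run_and_grade(run_once):
    """Call run_once() and translate its outcome into a result string."""
    try:
        run_once()
    except TestFail:
        return 'FAIL'
    except TestWarn:
        return 'WARN'
    except Exception:
        return 'ERROR'   # any other exception is an ERROR
    return 'PASS'        # no exception means PASS
```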
For simple tests you can just run the test binary like this:
<pre>self.results = utils.system_output(cmd, retain_output=True)</pre>
If <code>cmd</code> is successful (i.e. it returns an exit status of 0) then <code>utils.system_output()</code> will return the output of the command. Otherwise it will raise <code>error.CmdError</code>, which will immediately end the test.
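For reference, here is a rough standard-library equivalent of that behavior. This is a sketch only: autotest's real <code>utils.system_output()</code> takes more options (such as <code>retain_output</code>), and <code>CmdError</code> below is a local stand-in for <code>error.CmdError</code>:

```python
import subprocess

class CmdError(Exception):
    """Stand-in for error.CmdError: raised when the command fails."""

def system_output(cmd):
    # Run the command through the shell and capture its stdout as text.
    proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    if proc.returncode != 0:
        # Non-zero exit status: fail loudly, like utils.system_output()
        raise CmdError('%r exited with status %d' % (cmd, proc.returncode))
    return proc.stdout.rstrip('\n')
```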
Some tests always exit successfully, so you'll need to inspect their output to decide whether they passed or failed. That would look more like this:
<pre>output = utils.system_output(cmd, retain_output=True)
if 'FAILED' in output:
    raise error.TestFail('test output contains FAILED')
elif 'WARNING' in output:
    raise error.TestWarn('test output contains WARNING')
</pre>
=== Test Objects: Returning extra data ===
Additional test-level information can be recorded with <code>test.write_test_keyval(dict)</code>:
<pre>
extrainfo = dict()
for line in self.results.splitlines():
    if line.startswith("kernel version "):
        extrainfo['kernelver'] = line.split()[2]
    ...
self.write_test_keyval(extrainfo)
</pre>
(Note that <code>self.results</code> is the string returned by <code>utils.system_output()</code> above, so we iterate over its lines with <code>splitlines()</code>, and the version is the third whitespace-separated token, index 2.)
* For per-iteration data (performance numbers, etc.) there are three methods:
** Just attr: <code>test.write_attr_keyval(attr_dict)</code>
** Just perf: <code>test.write_perf_keyval(perf_dict)</code>
** Both: <code>test.write_iteration_keyval(attr_dict, perf_dict)</code>
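A hedged sketch of how the per-iteration call might be used from inside <code>run_once()</code>. The <code>FakeTest</code> class below only mimics the method signature and collects keyvals in memory (real autotest writes keyval files), and the performance numbers are invented:

```python
class FakeTest:
    """Minimal stand-in for a test object; mimics write_iteration_keyval."""
    def __init__(self):
        self.iterations = []

    def write_iteration_keyval(self, attr_dict, perf_dict):
        # Record one iteration's attributes and performance numbers.
        self.iterations.append((attr_dict, perf_dict))

test = FakeTest()
for i in range(3):
    # attr_dict holds descriptive attributes, perf_dict the measurements
    test.write_iteration_keyval({'iteration': i},
                                {'throughput_mb_sec': 100.0 + i})
```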
=== Test Objects: Attributes for directories ===
<code>test</code> objects have the following attributes available<ref>http://autotest.kernel.org/browser/branches/0.10.1/client/common_lib/test.py#L9</ref>:
<pre>
outputdir   eg. results/<job>/<testname.tag>
resultsdir  eg. results/<job>/<testname.tag>/results
profdir     eg. results/<job>/<testname.tag>/profiling
debugdir    eg. results/<job>/<testname.tag>/debug
bindir      eg. tests/<test>
src         eg. tests/<test>/src
tmpdir      eg. tmp/<tempname>_<testname.tag>
</pre>
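As an illustration of that layout, the result-directory attributes can be derived from <code>outputdir</code> like this (plain-Python sketch with placeholder job and test names, not actual autotest code):

```python
import os

# Placeholder values for <job> and <testname.tag>
job = 'job1'
testtag = 'mytest.tag'

outputdir = os.path.join('results', job, testtag)
resultsdir = os.path.join(outputdir, 'results')
profdir = os.path.join(outputdir, 'profiling')
debugdir = os.path.join(outputdir, 'debug')
```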
<references/>

Latest revision as of 20:57, 25 August 2009