AutoQA resultsdb API

Syntax Description

 * method_name ~ name of the respective method
 * argument ~ required argument
 * argument=None ~ optional argument; the default value is None
 * argument="Foo" ~ optional argument; the default value is "Foo"
 * return_value ~ the value the method returns

start_job
Params
 * url ~ link to the wiki page with metadata (useful for frontends)
Returns
 * job_id ~ job identifier for the Job <-> Testrun relationship.

Intended to be used mostly by the AutoQA scheduler, when the results of several tests need to be logically connected for one package/repo/etc. The job_id value will then be passed to the test, probably via the control file (i.e. as another argument for start_testrun).

start_testrun
Params
 * url ~ link to the wiki page with metadata (useful for frontends)
 * keyvals ~ optional argument; dictionary (JSON?) of key-value pairs to be stored
 * job_id ~ optional argument; if set, a new record will be created in the Job <-> Testrun relationship table.
Returns
 * testrun_id ~ identifier of the record inside the Testrun table.

Used to create a new entry in the Testrun table. Sets the start_time and, if job_id was set, creates a new entry in the Job <-> Testrun relationship table. Returns testrun_id, which is required as an argument for almost every other method; testrun_id is the key identifying the relationship between Testrun and the other tables in the database.

end_testrun
Params
 * testrun_id ~ Testrun identifier (see start_testrun)
 * result ~ PASSED, FAILED, ABORTED, INFO, ... (see the ResultsDB schema); if an unrecognized value is passed, NEEDS_INSPECTION is set
 * log_url ~ URL pointing to logs etc. (most probably in the Autotest storage)
 * keyvals ~ dictionary (JSON?) of key-value pairs to be stored (see store_keyval)
 * summary ~ ? not sure right now; probably the name of the file with the summary, which can be found at log_url
 * highlights ~ ? not sure right now; probably the name of the file containing a 'digest' of the logs (created by the test by selecting the appropriate error/warning messages etc.), which can be found at log_url
 * output ~ logged (and possibly slightly filtered) stdout/stderr
 * score ~ optional score. This can be any number; the test decides how to use it. It can represent the number of errors, or some other metric, such as performance for performance tests.

Be aware that when storing keyval pairs, all keys that are not strings, and all values that are not strings (or lists/tuples of strings), are skipped without further notice.
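The skipping rule can be sketched in Python as follows (filter_keyvals is a hypothetical helper for illustration, not part of the API):

```python
def filter_keyvals(keyvals):
    """Keep only the keyval pairs ResultsDB would accept.

    Keys must be strings; values must be strings or lists/tuples
    of strings. Everything else is dropped without notice.
    """
    accepted = {}
    for key, value in keyvals.items():
        if not isinstance(key, str):
            continue  # non-string key: skipped silently
        if isinstance(value, str):
            accepted[key] = value
        elif (isinstance(value, (list, tuple))
              and all(isinstance(v, str) for v in value)):
            accepted[key] = list(value)
        # anything else (int, dict, mixed list, ...) is skipped
    return accepted
```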

start_phase
Params
 * testrun_id ~ Testrun identifier (see start_testrun)
 * name ~ name of the phase, used for display in the frontends.

Some tests may be divided into a number of phases. Phases may be nested, but you can only ever end the "most recently started" phase (see end_phase). Each phase has its own result (see end_phase), but it does not directly influence the Testrun result (i.e. you still need to set the result in end_testrun).

end_phase
Params
 * testrun_id ~ Testrun identifier (see start_testrun)
 * result ~ PASSED, FAILED, INFO, ... (see the ResultsDB schema)

Ends the "most recently started" phase. The phase result is used only for frontend purposes and does not in any way directly influence the Testrun result (at least for API purposes).

store_keyval
Params
 * testrun_id ~ Testrun identifier (see start_testrun)
 * keyvals ~ dictionary (JSON?) of key-value pairs to be stored.

Be aware that when storing keyval pairs, all keys that are not strings, and all values that are not strings (or lists/tuples of strings), are skipped without further notice.

Keyval pairs hold the required/recommended/other additional data specific to each type of test (package test/repo test/install test/... see AutoQA_resultsdb_schema); one can of course add any other keyval pairs, e.g. for one's own frontend.

These values, represented by a dictionary, will be parsed and stored as separate entries in the TestrunData table. Keys have to be strings; values can be either a string or a list of strings.

Examples (the key and value names here are illustrative)
 * {"arch": "x86_64"} will be saved as one record.
 * {"arch": ["i386", "x86_64"]} will create two rows ("arch" -> "i386" and "arch" -> "x86_64")
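The expansion of a keyvals dictionary into TestrunData rows can be sketched as follows (keyvals_to_rows is a hypothetical helper, and the tuple-of-rows representation is an assumption for illustration):

```python
def keyvals_to_rows(keyvals):
    """Expand a keyvals dictionary into (key, value) rows as they
    would be stored in the TestrunData table: a plain string value
    yields one row, a list/tuple of strings yields one row per
    element."""
    rows = []
    for key, value in keyvals.items():
        if isinstance(value, str):
            rows.append((key, value))
        elif isinstance(value, (list, tuple)):
            rows.extend((key, v) for v in value)
    return rows
```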

Simple
testrun_id = start_testrun("http://fedoraproject.org/wiki/QA:Some_test_page")
end_testrun(testrun_id, "PASSED", log_url)

Phases - simple
testrun_id = start_testrun("http://fedoraproject.org/wiki/QA:Some_test_page")
start_phase(testrun_id, "First phase")
end_phase(testrun_id, "PASSED")
start_phase(testrun_id, "Second phase")
end_phase(testrun_id, "PASSED")
end_testrun(testrun_id, "PASSED", log_url)

Phases - nested
testrun_id = start_testrun("http://fedoraproject.org/wiki/QA:Some_test_page")
start_phase(testrun_id, "First phase")
start_phase(testrun_id, "Second phase")
end_phase(testrun_id, "PASSED")
end_phase(testrun_id, "PASSED")
end_testrun(testrun_id, "PASSED", log_url)

Note: This means phases may be nested, but they may not partially overlap (phase1 may not end while phase2 is active).
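The nesting rule amounts to a stack discipline: start_phase pushes, end_phase pops. A minimal sketch (PhaseTracker is an illustrative helper, not part of the API):

```python
class PhaseTracker:
    """Sketch of the phase-nesting rule: phases form a stack, so
    only the most recently started phase can be ended, and phases
    may nest but never partially overlap."""

    def __init__(self):
        self._stack = []

    def start_phase(self, name):
        # A new phase opens inside the currently active one (if any).
        self._stack.append(name)

    def end_phase(self, result):
        # Only the innermost (most recently started) phase can end.
        if not self._stack:
            raise RuntimeError("no phase is active")
        name = self._stack.pop()
        return (name, result)  # each phase records its own result
```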

Using Job
job_id = start_job("http://fedoraproject.org/wiki/QA:Some_test_page")
testrun_id = start_testrun("http://fedoraproject.org/wiki/QA:Some_test_page", job_id=job_id)
start_phase(testrun_id, "First phase")
end_phase(testrun_id, "PASSED")
end_testrun(testrun_id, "PASSED", log_url)

testrun_id = start_testrun("http://fedoraproject.org/wiki/QA:Some_other_test_page", job_id=job_id)
start_phase(testrun_id, "First phase")
end_phase(testrun_id, "PASSED")
end_testrun(testrun_id, "PASSED", log_url)