
Revision as of 15:09, 10 June 2013 by Fabiand (Initial evaluation)

Not done yet
Some points are still unclear; the information will be filled in as soon as they are clarified.

General Information

Communication and Community

  • how many projects are using it
    • Igor is used in the oVirt project for oVirt Node; it is also used internally by Red Hat and IBM.
  • how old is it
    • Approximately one year
  • how many active devs from how many orgs
    • About one developer, from Red Hat
  • quality of docs
    • Not bad, getting better
  • how much mailing list traffic is there?
    • No mailing list right now
  • what is the bug tracker?
    • RHBZ
  • what is the patch process?
    • Currently through gitorious merge requests (migration to github planned)
  • what is the RFE process?
    • An RFE bug in Bugzilla

High level stuff

  • how tightly integrated are the components
    • What components?
  • what license is the project released under
    • GPL2+
  • how much is already packaged in fedora
    • igord is completely packaged
    • igor client (needs to be part of the OS under test) is not yet packaged


  • what mechanism does the api use (xmlrpc, json-rpc, restful-ish etc.)
    • RESTful XML/JSON
  • can you schedule jobs through the api
    • yes
  • what scheduling params are available through the api
    • (testsuite, host, profile, [kargs]) tuple
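Scheduling via the RESTful API boils down to sending the (testsuite, host, profile, [kargs]) tuple to the daemon. The sketch below only builds such a request URL; the path layout and parameter name are illustrative assumptions, not igor's documented endpoint.

```python
from urllib.parse import quote, urlencode

def build_submit_url(base, testsuite, profile, host, kargs=None):
    """Build a job-submission URL from the (testsuite, host, profile,
    [kargs]) scheduling tuple.  The path layout and the
    'additional_kargs' parameter are hypothetical placeholders."""
    path = "/".join(quote(p) for p in ("jobs", "submit", testsuite,
                                       "with", profile, "on", host))
    url = f"{base.rstrip('/')}/{path}"
    if kargs:
        # Optional kernel arguments travel as a query parameter.
        url += "?" + urlencode({"additional_kargs": kargs})
    return url

url = build_submit_url("http://igord.example.com:8080",
                       "node-basic", "fedora-rawhide", "default-vm-host",
                       kargs="quiet")
```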


  • how flexible is the schema for the built in results store
    • A job consists of many steps (testcases); there can be one result for each step (testcase)
    • each step (testcase) can additionally be annotated and artifacts (files) can be attached
  • what data is stored in the default result
    • igor has no persistent result store, results are lost after the daemon quits
    • a hook mechanism is intended to feed the results (available in XML/JSON) into the actual store (Jenkins in the current case, via JUnit)
    • common results available are: pass/fail, job information, testsuite information, host information, artifacts, annotations
  • is there a difference between failed execution and status based on result analysis
    • could you rephrase this question?
  • what kinds of analysis are supported
    • some basic passed/failed analysis is done; notification of any follow-up/external component is intended to be realized by hooks
    • currently there is a web UI, a JUnit output, and a fancy CLI application displaying the results
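Since results are not persisted, a hook has to translate them into the target store's format; feeding Jenkins via JUnit is the current case. The snippet below sketches such a hook; the shape of the input dict is an assumption here, not igor's exact result schema.

```python
import xml.etree.ElementTree as ET

def results_to_junit(job):
    """Convert a job-result dict (field names assumed, not igor's exact
    schema) into a JUnit-style <testsuite> document, as a hook feeding
    a store such as Jenkins might do."""
    steps = job["steps"]
    suite = ET.Element("testsuite", {
        "name": job["testsuite"],
        "tests": str(len(steps)),
        "failures": str(sum(1 for s in steps if not s["passed"])),
    })
    for step in steps:
        case = ET.SubElement(suite, "testcase", {"name": step["name"]})
        if not step["passed"]:
            # Annotations attached to a failed step become failure text.
            ET.SubElement(case, "failure").text = step.get("note", "")
    return ET.tostring(suite, encoding="unicode")

junit_xml = results_to_junit({
    "testsuite": "node-basic",
    "steps": [{"name": "boot", "passed": True},
              {"name": "selinux", "passed": False, "note": "denials found"}],
})
```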

VM management

  • does it work with any external systems (ovirt, openstack etc.)
    • there is currently a libvirt backend
  • does it support rapid cloning
    • not yet, but it's possible to add
  • how are vms configured post-spawn
    • VMs are only configured through kernel arguments (a kernel argument can trigger a kickstart-based installation, after which the VM can additionally be configured using the kickstart file)
  • control over vm configuration (vnc/spice, storage type etc.)
    • yes, through the API
  • ephemeral client support?
    • it's a key feature of igor
    • volatile and "persistent" VMs are supported as well as real hosts
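Post-spawn configuration therefore reduces to assembling a kernel command line, typically pointing at a kickstart file. A minimal sketch, in which the helper and the exact parameter names are illustrative assumptions:

```python
def build_kargs(kickstart_url=None, extra=None):
    """Assemble a kernel command line for a freshly spawned VM.
    This helper is illustrative; igor itself simply passes the kargs
    part of the scheduling tuple through to the boot loader."""
    args = ["console=ttyS0"]
    if kickstart_url:
        # A kickstart URL on the command line turns first boot into an
        # unattended installation; all further configuration then lives
        # in the kickstart file itself.
        args.append(f"inst.ks={kickstart_url}")
    if extra:
        args.extend(extra)
    return " ".join(args)

kargs = build_kargs("http://igord.example.com/ks/node.ks", ["quiet"])
```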

Test harness

  • base language
    • a simple harness written in Python, with a bash wrapper exposing all Python functions
  • how tightly integrated is it with the system as a whole
    • makes some basic assumptions about the base os (vcs, uinput, bash, ..)
  • are any non-primary harnesses supported
    • basically yes, but no other harness is provided yet; xpresserng is a candidate
    • another party is working on autotest support

Test execution

  • how are tests stored
    • tests are executables (typically Python or bash), stored in a filesystem
    • the SUT retrieves them via the API
  • support for storing tests in vcs
    • yes, is done in ovirt-node
  • method for passing data into test for execution
    • yes, via API
    • an alternative is in the works
  • how are parameters stored for post-failure analysis
    • all parameters are kept in the job status (and can be retrieved via the API)
  • support for replaying a test
    • can re-run previous job
  • can tests be executed locally in a dev env with MINIMAL setup
    • yes, e.g. by running the test script itself
    • yes, using the libvirt-only configuration, where only libvirt is required and used to run the tests
  • external log shipping?
    • could you rephrase this question?
    • any command can be run as long as it's available or distributed using the "lib" feature of igor
  • how tightly integrated is result reporting
    • result reporting is done using the API; the SUT uses the API, as does any client triggering jobs.
    • the SUT can report results, add annotations, even trigger another job or add artifacts
  • what kind of latency is there between tests?
    • if a new VM is created for a job, there is time needed to prepare it
    • if a VM is re-used there are a couple of seconds of management work done in the background
    • if a real host is used it really depends on how quickly the host boots.
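Since the SUT reports results, annotations, and artifacts back over the same API, a client on the SUT mainly needs to know which endpoints to hit. The sketch below only derives such endpoint URLs; the paths are placeholders illustrating the idea, not igor's published API.

```python
from urllib.parse import quote

def report_endpoints(base, session, step, passed):
    """Endpoints a client on the SUT might hit to report a step result,
    attach an annotation, or upload an artifact.  All paths here are
    hypothetical placeholders, not igor's documented routes."""
    base = base.rstrip("/")
    job = f"{base}/jobs/{quote(session)}"
    return {
        # Per-step pass/fail result for the running job.
        "result":     f"{job}/step/{step}/{'success' if passed else 'failed'}",
        # Free-form annotation attached to the job.
        "annotation": f"{job}/annotate",
        # File (artifact) upload for post-failure analysis.
        "artifact":   f"{job}/artifacts",
    }

eps = report_endpoints("http://igord.example.com:8080", "job-42", 3, True)
```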