From Fedora Project Wiki
Not done yet
There are some unclear points; the information will be filled in as soon as those points are clarified.

General Information

Communication and Community

  • how many projects are using it
Igor is used in the oVirt project for oVirt Node, and additionally used internally by Red Hat and IBM.
  • how old is it
Approximately one year
  • how many active devs from how many orgs
About one developer, from Red Hat
  • quality of docs
    • Not bad, getting better
  • how much mailing list traffic is there?
No mailing list right now
  • what is the bug tracker?
Red Hat Bugzilla (RHBZ)
  • what is the patch process?
Currently through Gitorious merge requests (a migration to GitHub is planned)
  • what is the RFE process?
An RFE bug in Bugzilla

High level stuff

  • how tightly integrated are the components
    • the components (roughly runner, harness and scheduler) are loosely coupled
    • The idea is that igor is triggered by an external scheduler (e.g. Jenkins)
    • To run any testcases igor relies on a small client which needs to be part of the OS under test (igor-service)
    • The harness is basically independent of igor. Igor's harness can be used for additional features like annotations and artifacts, but the basic reporting is done through the igor-service, which reports the results of each step/testcase back to the daemon (see the sketch below this list)
  • what license is the project released under
    • GPL2+
  • how much is already packaged in fedora
    • igord is completely packaged
    • the igor client (which needs to be part of the OS under test) is not yet packaged
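
To illustrate how loosely the components are coupled, here is a minimal sketch of how an igor-service-like client could report a step result back to the daemon over the RESTful API. The daemon address, URL layout and parameter names are assumptions for illustration, not igor's actual routes:

  # Hedged sketch: a client inside the OS under test reporting the result
  # of a single step (testcase) back to the igor daemon over REST.
  # IGORD, SESSION and the URL layout are hypothetical.
  import urllib.request

  IGORD = "http://igord.example.com:8080"  # hypothetical daemon address
  SESSION = "job-1234"                     # hypothetical job/session id

  def report_step(step, result):
      """Report one step result, e.g. 'passed' or 'failed'."""
      url = "%s/jobs/%s/steps/%s/%s" % (IGORD, SESSION, step, result)
      urllib.request.urlopen(url).read()

  report_step("check-networking", "passed")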

API

  • what mechanism does the api use (xmlrpc, json-rpc, restful-ish etc.)
    • RESTful XML/JSON
  • can you schedule jobs through the api
    • yes
  • what scheduling params are available through the api
    • a (testsuite, host, profile, [kargs]) tuple; see the example below this list
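
As a concrete illustration of the scheduling tuple, the following hedged sketch submits a job through the RESTful API. The URL layout and the kargs parameter name are assumptions; igor's real routes may differ:

  # Hedged sketch: scheduling a job with a (testsuite, host, profile,
  # [kargs]) tuple over the REST API. All paths and names are hypothetical.
  import json
  import urllib.parse
  import urllib.request

  IGORD = "http://igord.example.com:8080"  # hypothetical daemon address

  def submit_job(testsuite, host, profile, kargs=None):
      url = "%s/jobs/submit/%s/with/%s/on/%s" % (IGORD, testsuite,
                                                 profile, host)
      if kargs:
          # optional additional kernel arguments
          url += "?kargs=" + urllib.parse.quote(kargs)
      with urllib.request.urlopen(url) as resp:
          return json.load(resp)  # the daemon replies in JSON (or XML)

  print(submit_job("basic-suite", "default-libvirt-host", "fedora-profile"))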

Results

  • how flexible is the schema for the built in results store
    • A job consists of many steps (testcases); there can be one result for each step (testcase)
    • each step (testcase) can additionally be annotated, and artifacts (files) can be attached
  • what data is stored in the default result
    • igor has no persistent result store; results are lost after the daemon quits
    • a hook mechanism is intended to feed the results (available in XML/JSON) into the actual store (Jenkins in the current case, via JUnit); see the sketch below this list
    • common results available are: pass/fail, job information, testsuite information, host information, artifacts, annotations
  • is there a difference between failed execution and status based on result analysis
    • the state of a test is differentiated into: passed, failed, aborted
    • the state of a testcase is differentiated into: passed, failed, aborted, skipped, queued
  • what kinds of analysis are supported
    • some basic passed/failed analysis is done; notification of any follow-up/external component is intended to be realized through hooks
    • currently there is a web UI, a JUnit output and a fancy CLI application for displaying the results
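
Because the hook mechanism is supposed to push the XML/JSON results into an external store, such a hook could look roughly like the sketch below, which turns a JSON result document into JUnit XML for Jenkins. The JSON field names used here are assumptions, not igor's actual schema:

  # Hedged sketch of a results hook: read job results as JSON on stdin
  # and print JUnit XML for Jenkins. The field names ("testsuite",
  # "steps", "name", "result", "note") are assumptions for illustration.
  import json
  import sys
  from xml.etree import ElementTree as ET

  def results_to_junit(results):
      suite = ET.Element("testsuite", name=results.get("testsuite", "igor"))
      for step in results["steps"]:
          case = ET.SubElement(suite, "testcase", name=step["name"])
          if step["result"] == "failed":
              ET.SubElement(case, "failure", message=step.get("note", ""))
          elif step["result"] == "skipped":
              ET.SubElement(case, "skipped")
      return ET.tostring(suite, encoding="unicode")

  print(results_to_junit(json.load(sys.stdin)))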

VM management

  • does it work with any external systems (ovirt, openstack etc.)
    • there is currently a libvirt backend
  • does it support rapid cloning
    • not yet, but it's possible to add
  • how are vms configured post-spawn
    • VMs are only configured through kernel arguments (a kernel argument can trigger a kickstart-based installation; the VM can then additionally be configured using the kickstart file)
  • control over vm configuration (vnc/spice, storage type etc.)
    • yes, through the API
  • ephemeral client support?
    • it's a key feature of igor
    • volatile and "persistent" VMs are supported, as well as real hosts (see the libvirt sketch below this list)
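
The sketch below shows the idea behind volatile VMs and kernel-argument-only configuration, using the libvirt Python bindings; it illustrates the approach, not igor's actual backend code, and the paths and kickstart URL are hypothetical:

  # Hedged sketch: boot a transient ("volatile") VM with libvirt,
  # configured only through kernel arguments; here one argument points to
  # a hypothetical kickstart file that does the rest of the configuration.
  import libvirt

  DOMAIN_XML = """
  <domain type='kvm'>
    <name>igor-ephemeral-client</name>
    <memory unit='MiB'>1024</memory>
    <vcpu>1</vcpu>
    <os>
      <type arch='x86_64'>hvm</type>
      <kernel>/var/lib/igor/vmlinuz</kernel>
      <initrd>/var/lib/igor/initrd.img</initrd>
      <cmdline>ks=http://igord.example.com/profile.ks</cmdline>
    </os>
  </domain>
  """

  conn = libvirt.open("qemu:///system")
  # createXML() starts a transient domain: it vanishes again once shut
  # down, which matches the volatile-VM use case.
  dom = conn.createXML(DOMAIN_XML, 0)
  print("started:", dom.name())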

Test harness

  • base language
    • a simple harness written in Python, with a bash wrapper exposing all Python functions (see the sketch below this list)
  • how tightly integrated is it with the system as a whole
    • makes some basic assumptions about the base OS (vcs, uinput, bash, ...)
  • are any non-primary harnesses supported
    • basically yes, but no other is provided yet; xpresserng is a candidate
    • another party is working on Autotest support
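
The wrapper pattern can be sketched like this, assuming a Python module whose functions a thin bash wrapper exposes by shelling out to a dispatcher; the function and variable names are hypothetical:

  # Hedged sketch of the harness split: Python functions plus a
  # dispatcher, so a bash wrapper can expose each function as a shell
  # command, e.g.
  #   step_succeed() { python harness.py step_succeed "$@"; }
  import sys
  import urllib.request

  IGORD = "http://igord.example.com:8080"  # hypothetical daemon address

  def step_succeed(step):
      """Tell the daemon that the given step (testcase) passed."""
      urllib.request.urlopen("%s/jobs/current/steps/%s/passed" % (IGORD, step))

  if __name__ == "__main__":
      # dispatch: first argument names the function, the rest are its args
      func, args = sys.argv[1], sys.argv[2:]
      globals()[func](*args)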

Test execution

  • how are tests stored
    • tests are executables (typically Python or bash), stored in a filesystem (see the minimal testcase sketch at the end of this list)
    • the SUT retrieves them via the API
  • support for storing tests in vcs
    • yes, this is done in ovirt-node
  • method for passing data into test for execution
    • yes, via API
    • an alternative is in the works
  • how are parameters stored for post-failure analysis
    • all parameters are kept in the job status (and can be retrieved via the API)
  • support for replaying a test
    • can re-run previous job
  • can tests be executed locally in a dev env with MINIMAL setup
    • yes, e.g. by running the script itself
    • yes, using the libvirt-only configuration, where only libvirt is required and used to run the tests
  • external log shipping?
    • any command can be run as long as it's available or distributed using the "lib" feature of igor; the default harness offers a set of functions to report information back to the daemon
    • the daemon offers hooks which are called on job state changes, they can be used to connect igor to other tools
  • how tightly integrated is result reporting
    • result reporting is done using the API
    • the client which triggers jobs also uses the same API
    • the SUT can report results, add annotations, even trigger another job or add artifacts
  • what kind of latency is there between tests?
    • if a new VM is created for a job, there is time needed to prepare it
    • if a VM is re-used there are a couple of seconds of management work done in the background
    • if a real host is used it really depends on how quickly the host boots.
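
To round this off, here is a minimal sketch of what a testcase executable could look like; that pass/fail is signalled through the exit status, and the environment variable name, are assumptions for illustration:

  #!/usr/bin/env python
  # Hedged sketch of a testcase: an executable stored in the testsuite
  # tree and retrieved by the SUT via the API. Exit status 0/1 as the
  # pass/fail signal and IGOR_TESTSUITE are assumptions.
  import os
  import sys

  def main():
      print("testsuite:", os.environ.get("IGOR_TESTSUITE", "<unknown>"))
      ok = os.path.exists("/etc/os-release")  # trivial sanity check
      return 0 if ok else 1

  if __name__ == "__main__":
      sys.exit(main())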