= Standard Discovery, Staging and Invocation of Integration Tests =


== Summary ==


Let's define a clear delineation between a ''test suite'' (including its framework) and the CI system that runs the test suite. This delineation is the standard interface.

[[File:Invoking-tests-standard-interface.png|800px]]
 
What follows is a standard way to discover, package and invoke integration tests for a package stored in a Fedora dist-git repo.


Many Fedora packages have unit tests. These tests are typically run during a <code>%check</code> RPM build step and run in a build root. Integration testing, on the other hand, should happen against a composed system. Upstream projects have integration tests; both Fedora QA and the Atomic Host team would like to create more integration tests; and Red Hat would like to bring integration tests upstream.


== Owner ==
* Name: [[User:Stefw| Stef Walter]]
* Email: stefw@fedoraproject.org
* Name: [[User:pingou|Pierre-Yves Chibon]]
* Email: pingou@fedoraproject.org
* Name: [[User:astepano|Andrei Stepanov]]
* Name: [[User:sturivny|Serhii Turivnyi]]
* Email: sturivny@fedoraproject.org


== Terminology ==


* '''Test Subject''': The items that are to be tested.
** Examples: RPMs, OCI image, ISO, QCow2, Module repository ...
* '''Test''': A callable/runnable piece of code and corresponding test data and mocks which exercises and evaluates a ''test subject''.
** '''Test environment''': The environment in which the actual test run takes place. The test has a direct impact on the test environment.
* '''Test Suite''': The collection of all tests that apply to a ''test subject''.
* '''Test Framework''': A library or component that the ''test suite'' and ''tests'' use to accomplish their job.
** Examples: [https://avocado-framework.github.io/ Avocado], [https://wiki.gnome.org/Initiatives/GnomeGoals/InstalledTests GNOME Installed Tests], [https://github.com/fedora-modularity/meta-test-family/ Meta Test Family], [https://github.com/projectatomic/atomic-host-tests Ansible tests in Atomic Host], [https://tunir.readthedocs.io/en/latest/ Tunir tests], docker test images, ...
* '''Test Result''': A boolean pass/fail output of a ''test suite''.
** ''Test results'' are for consumption by automated aspects of a ''testing system''.
* '''Test Artifact''': Any additional output of the ''test suite'' such as the stdout/stderr output, log files, screenshots, core dumps, or TAP/JUnit/subunit streams.
** ''Test artifacts'' are for consumption by humans, archival, or big data analysis.
* '''Testing System''': A CI or other ''testing system'' that would like to discover, stage and invoke tests for a ''test subject''.
** Examples: [https://jenkins.io/ Jenkins], [https://taskotron.fedoraproject.org/ Taskotron], [https://docs.openstack.org/infra/zuul/ ZUUL], [https://ci.centos.org/ CentOS CI], [https://github.com/projectatomic/papr Papr], [https://travis-ci.org/ Travis], [https://semaphoreci.com/ Semaphore], [https://developers.openshift.com/managing-your-applications/continuous-integration.html Openshift CI/CD], [https://wiki.ubuntu.com/ProposedMigration/AutopkgtestInfrastructure Ubuntu CI], ...
** The testing system uses a '''test runner''' as the place where tests run; it delegates test execution to the test runner. Examples: local machine, VM, ...


== Responsibilities ==
 
The '''testing system''' is responsible to:
* Build or otherwise acquire the ''test subject'', such as a package, container image, tree …
* Decide which ''test suite'' to run, often by using the standard interface to discover appropriate ''tests'' for the dist-git repo that a test subject originated in.
* Schedule, provision or orchestrate a job to run the ''test suite'' on appropriate compute, storage, ...
* Stage the ''test suite'' as described by the ''standard interface''.
* Invoke the ''test suite'' as described by the ''standard interface''.
* Gather the ''test results'' and ''test artifacts'' as described by the ''standard interface''.
* Announce and relay the ''test results'' and ''test artifacts'' for gating, archival ...
 
The '''standard interface''' describes how to:
* Discover a ''test suite'' for a given dist-git repo.
* Uniquely identify a ''test suite''.
* Stage a ''test suite'' and its dependencies such as ''test frameworks''.
* Provide the ''test subject'' to the ''test suite''.
* Invoke a ''test suite'' in a consistent way.
* Gather ''test results'' and ''test artifacts'' from the invoked ''test suite''.


The '''test suite''' is responsible to:
* Declare its dependencies such as a ''test framework'' via the ''standard interface''.
* Execute the ''test framework'' as necessary.
* Provision (usually locally) any containers or virtual machines necessary for testing the ''test subject''.
* Provide ''test results'' and ''test artifacts'' back according to the ''standard interface''.


The format of the textual logs and ''test artifacts'' that come out of a test suite is not prescribed by this document. Nor is it envisioned to be standardized across all possible ''test suites''.


== Requirements ==


* The ''test suite'' and ''test framework'' SHOULD NOT leak their implementation details into the testing system, other than via the ''standard interface''.
* The ''test suite'' and ''test framework'' SHOULD NOT rely on the behavior of the testing system other than the ''standard interface''.
* The ''standard interface'' MUST enable a dist-git packager to run a ''test suite'' locally.
** ''Test suites'' or ''test frameworks'' MAY call out to the network for certain tasks.
* It MUST be possible to stage an upstream ''test suite'' using the ''standard interface''.
* Both ''in-situ tests'', and more rigorous ''outside-in tests'' MUST be possible with the ''standard interface''.
** For ''in-situ tests'' the ''test suite'' is in the same file system tree and process space as the ''test subject''.
** For ''outside-in tests'' the ''test suite'' is outside of the file system tree and process space of the ''test subject''.
* The ''test suite'' and ''test framework'' SHOULD be able to provision containers and virtual machines necessary for its testing without requesting them from the ''testing system''.
* The ''standard interface'' SHOULD describe how to uniquely identify a ''test suite''.


== Benefit to Fedora ==


Developers benefit by having a consistent target for how to describe tests, while also being able to execute them locally while debugging issues or iterating on tests.


By staging and invoking tests consistently in Fedora, we create an ecosystem for the tests that allows varied test frameworks as well as CI system infrastructure to interoperate. The integration tests outlast the implementation details of either the frameworks they're written in or the CI systems running them.


== Detailed Description ==


This standard interface describes how to discover, stage and invoke tests. It is important to cleanly separate implementation details of the ''testing system'' from the ''test suite'' and its framework. It is also important to allow Fedora packagers to locally and manually invoke a ''test suite''.


'''First see the [https://fedoraproject.org/wiki/Changes/InvokingTests#Terminology Terminology], the division of [https://fedoraproject.org/wiki/Changes/InvokingTests#Responsibilities Responsibilities], and the [https://fedoraproject.org/wiki/Changes/InvokingTests#Requirements Requirements] above.'''


=== Staging ===


Test files will be added to the <code>tests/</code> folder of a dist-git repository branch. The structure of the files and folders is left to the packagers' discretion, but there must be one or more playbooks in the <code>tests/</code> folder that can be invoked to run the test suites (an example layout follows).
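As an illustration only (all file names other than the <code>tests*.yml</code> pattern and the optional <code>inventory</code> are hypothetical), such a repository might look like:

<pre>
tests/
├── tests.yml           # a test suite playbook; matches the tests*.yml glob
├── tests-docker.yml    # an optional, additional test suite
└── inventory           # optional inventory file or script
</pre>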


# The ''testing system'' SHOULD stage the tests on a target operating system (e.g. Fedora) appropriate for the branch name of the dist-git repository containing the tests.
# The ''testing system'' SHOULD stage a clean system for each set of tests it runs.
# The ''testing system'' MUST stage the following packages:
## <code>ansible python2-dnf libselinux-python standard-test-roles</code>
# The ''testing system'' MUST clone the dist-git repository for the test and check out the appropriate branch.
# The contents of <code>/etc/yum.repos.d</code> on the staged system SHOULD be replaced with repository information that reflects the known good Fedora packages corresponding to the branch of the dist-git repository.
## The ''testing system'' MAY use multiple repositories, including ''updates'' or ''updates-testing'', to ensure this.
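A rough local approximation of these staging steps might look like the following shell sketch (the package name <code>gzip</code> and the branch <code>f27</code> are only illustrative):

<pre>
# Install the packages the standard interface requires on the staged system
sudo dnf install -y ansible python2-dnf libselinux-python standard-test-roles

# Clone the dist-git repository under test and check out the matching branch
git clone https://src.fedoraproject.org/rpms/gzip.git
cd gzip
git checkout f27
</pre>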


=== Invocation ===


The testing system MUST run each playbook matching the glob <code>tests/tests*.yml</code> in the dist-git repo. Each of these files constitutes a test suite. Each test suite is invoked independently by the testing system as follows.


The ''test subjects'' are passed to the playbook and to the inventory as an operating system environment variable and as an Ansible variable. Often only one ''test subject'' is passed in; however, multiple subjects may be concatenated into a single shell-escaped string, which the playbooks and/or inventory script split (see the example after the table below). The following extensions are used to determine the type of each subject:


{|
! Identifier !! Test subject
|-
| *.rpm     || Absolute path to an RPM file
|-
| *.repo    || Absolute repo filenames appropriate for <code>/etc/yum.repos.d</code>
|-
| *.qcow2, *.qcow2c || Absolute path to one virtual machine disk image bootable with cloud-init
|-
| *.oci     || Absolute path of one OCI container image filesystem bundle
|-
| docker:*  || Fully qualified path to a docker image in a registry
|-
| ...       || Other ''test subject'' identifiers may be added later.
|}
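For example, a ''testing system'' might pass two ''test subjects'' (an RPM and a cloud image; the paths are hypothetical) as a single shell-escaped string:

<pre>
# Multiple subjects concatenated into one shell-escaped string
TEST_SUBJECTS='/var/tmp/gzip-1.8-2.fc27.x86_64.rpm /var/tmp/Fedora-Cloud-Base.qcow2'
export TEST_SUBJECTS
</pre>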




Various ''tests'' in a playbook constitute a ''test suite''. Some parts of these ''test suites'' will run only in certain contexts, against certain deliverable artifacts: certain tests will run against Atomic Host deliverables while others will not, and certain tests will run against Docker deliverables while others will not. This is related to, but does not exactly overlap with, the ''test subject'' identifiers above. Ansible tags are used to denote these contexts (see the table and the playbook sketch below).


{|
! Tag      !! Test context
|-
| atomic    || Atomic Host
|-
| container || A Docker or OCI container
|-
| classic   || Tested against a classic DNF/YUM-installed system.
|-
| ...       || Other ''test context'' tags may be added later.
|}
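As an illustration only, a play inside a <code>tests/tests.yml</code> playbook might mark the contexts it supports with Ansible tags roughly like this (the role name is hypothetical):

<pre>
- hosts: localhost
  # Contexts in which this test suite is meant to run
  tags: [ classic, atomic ]
  roles:
    - role: my-smoke-test   # hypothetical role that contains the actual tests
</pre>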




To invoke the tests, the ''testing system'' must perform the following tasks for each ''test suite'' playbook (a sketch of such an invocation follows the list):

# MUST execute the playbook with the following operating system environment variables:
## <code>TEST_SUBJECTS</code>: The ''test subjects'' string as described above
## <code>TEST_ARTIFACTS</code>: The full path of an empty folder for ''test artifacts''
# MUST execute the playbook with the following Ansible variables.
## <code>subjects</code>: The ''test subjects'' string as described above
## <code>artifacts</code>: The full path of an empty folder for ''test artifacts''
# SHOULD execute the playbook with all Ansible tags that best represent the intended ''test context''.
## The choice of ''test context'' tags is related to the ''test subject'' being tested
# MUST execute Ansible with inventory set to the full path of the file or directory <code>tests/inventory</code> if it exists.
## If the <code>tests/inventory</code> file doesn't exist, then the following inventory SHOULD be used as a default:<br> <code>/usr/share/ansible/inventory</code>
# MUST execute the playbook as root.
# MUST examine the exit code of the playbook. A zero exit code is a successful ''test result''; non-zero is a failure.
# MUST treat the file <code>test.log</code> in the <code>artifacts</code> folder as the main readable output of the test.
# SHOULD place the textual stdout/stderr of the <code>ansible-playbook</code> command in the <code>ansible.log</code> file in the <code>artifacts</code> folder.
# SHOULD treat the contents of the <code>artifacts</code> folder as the ''test artifacts''.
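
Put together, a ''testing system'' might invoke one ''test suite'' playbook roughly like the following shell sketch (the subject path, artifacts folder and chosen tag are illustrative; use <code>tests/inventory</code> instead of the default inventory if the repository provides one):

<pre>
export TEST_SUBJECTS=/var/tmp/gzip-1.8-2.fc27.x86_64.rpm   # test subject(s)
export TEST_ARTIFACTS=/var/tmp/artifacts                   # empty artifacts folder
mkdir -p "$TEST_ARTIFACTS"

sudo -E ansible-playbook \
    -i /usr/share/ansible/inventory \
    --tags classic \
    -e subjects="$TEST_SUBJECTS" \
    -e artifacts="$TEST_ARTIFACTS" \
    tests/tests.yml 2>&1 | tee "$TEST_ARTIFACTS/ansible.log"

# A zero exit status of ansible-playbook means the test result is a pass
echo "Result: ${PIPESTATUS[0]}"
</pre>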
 
Each ''test suite'' playbook, or ''test framework'' contained therein (a minimal playbook sketch follows this list):


# SHOULD drop privileges appropriately if the ''test suite'' should be run as non-root.
# MUST install any requirements of its ''test suite'' or ''test framework'' and MUST fail if this is not possible.
# MUST provision the ''test subject'' listed in the <code>subjects</code> variable appropriately for its playbook name (described above) and MUST fail if this is not possible.
# MUST place the main readable output of the ''test suite'' into a <code>test.log</code> file in the <code>artifacts</code> variable folder. This MUST happen even if some of the test suites fail.
# SHOULD place additional ''test artifacts'' in the folder defined in the <code>artifacts</code> variable.
# MUST return a zero exit code from the playbook if the ''test result'' is a pass, or a non-zero exit code if it is a fail.
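
A minimal sketch of a ''test suite'' playbook that honors this contract might look as follows; the actual test command is only an illustrative placeholder:

<pre>
- hosts: localhost        # or a group provided by the inventory
  tags: [ classic ]
  tasks:
    # Run the test and capture its output as the main readable test.log;
    # a failing command makes the task, and thus the playbook, exit non-zero
    - name: Run the test
      shell: /usr/bin/gzip --version > {{ artifacts }}/test.log 2>&1
</pre>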


If an inventory file or script exists (a static example follows the list below), it:


# MUST describe where to invoke the playbook and how to connect to that target.
# SHOULD launch or install any supported <code>$TEST_SUBJECTS</code> so that the playbook can be invoked against them.
# SHOULD put relevant logs in the <code>$TEST_ARTIFACTS</code> directory.
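
For the simplest in-situ case, a static <code>tests/inventory</code> file could be as small as the following sketch; real inventories are often executable scripts that also provision the <code>$TEST_SUBJECTS</code>:

<pre>
# Hypothetical static inventory: run the playbook directly on the local machine
localhost ansible_connection=local
</pre>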


=== Discovery ===


Test discovery is done via dist-git. Both packages and modules may have tests in this format. To list which ''test context'' a given dist-git directory or playbook is relevant for, use a command like the following:


<pre>
# ansible-playbook --list-tags tests.yml
</pre>


== Scope ==


Since the tests are added in a sub-folder of the dist-git repo, there are no changes required to the Fedora infrastructure, and this change will have no impact on the packagers' workflow and tooling.


Only the testing system will need to be taught to install the requirements and run the playbooks.


== User Experience ==


A standard way to package, store and run tests benefits Fedora stability, and makes Fedora better for users.


* This structure makes it easy to run tests locally, and thus potentially to reproduce an error triggered on the testing system.
* Ansible is increasingly popular, making it easier for people to contribute new tests.
* Since Ansible is widely used by system administrators, it could also help them bring test cases to packagers and developers for situations where something failed for them.


== Upgrade/compatibility impact ==


There is no real upgrade or compatibility impact. The tests will be branched per release, just as spec files are branched in dist-git today.


== Documentation ==


* [[CI|CI Landing page]]
* [[CI/Tests|Documentation page]]


== Proposals and Evaluation ==


During the selection process for a standard test invocation and layout format for Fedora, [[Changes/InvokingTestsProposals|several proposals]] were examined.


[[Category:FedoraAtomicCi]]
[[Category:FedoraCi]]
