
Revision as of 14:42, 25 April 2017

Standard Discovery, Packaging, Invocation of Integration Tests

Summary

Let's define a clear delineation between a test suite (including its framework) and the CI system that runs the test suite. This delineation is the standard interface.

[Figure: Invoking-tests-standard-interface.png]

What follows is a standard way to discover, package and invoke integration tests for a package stored in a Fedora dist-git repo.

Many Fedora packages have unit tests. These tests are typically run during the %check step of an RPM build, inside a build root. Integration testing, on the other hand, should happen against a composed system. Upstream projects already have integration tests; both Fedora QA and the Atomic Host team would like to create more, and Red Hat would like to bring integration tests upstream.

Owner

Current Proposals

There are currently three proposals for how to implement this change, and a final decision has not yet been made as to which will be adopted. The current proposals, evaluated on their own sub-proposal pages and voted on below, are Packaged Tests, Ansible Tests, and Control Tests.

Terminology

  • Test Subject: The items that are to be tested.
    • Examples: RPMs, OCI image, ISO, QCow2, Module repository ...
  • Test: A callable/runnable piece of code and corresponding test data and mocks which exercises and evaluates a test subject.
  • Test Suite: The collection of all tests that apply to a test subject.
  • Test Framework: A library or component that the test suite and tests use to accomplish their job.
  • Test Result: A boolean pass/fail output of a test suite.
    • Test results are for consumption by the automated aspects of a testing system.
  • Test Artifact: Any additional output of the test suite, such as stdout/stderr output, log files, screenshots, core dumps, or TAP/JUnit/subunit streams.
    • Test artifacts are for consumption by humans, for archival, or for big-data analysis.
  • Testing System: A CI or other testing system that would like to discover, stage and invoke tests for a test subject.
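
To make the relationships between these terms concrete, the following is a minimal illustrative sketch that models them as Python dataclasses. The class and field names are assumptions chosen for this example only; none of the proposals prescribe such a data model.

<pre>
# Illustrative sketch of the terminology above; all names are assumptions.
from dataclasses import dataclass, field
from typing import List


@dataclass
class TestSubject:
    """The item to be tested: an RPM, OCI image, ISO, qcow2 image, module repository, ..."""
    kind: str      # e.g. "rpm", "oci-image", "qcow2"
    location: str  # where the testing system can fetch it


@dataclass
class TestResult:
    """Boolean pass/fail output, consumed by the automated parts of the testing system."""
    passed: bool


@dataclass
class TestArtifact:
    """Additional output: logs, screenshots, core dumps, TAP/JUnit/subunit streams, ..."""
    name: str
    path: str


@dataclass
class TestSuite:
    """The collection of all tests that apply to a test subject."""
    name: str
    dependencies: List[str] = field(default_factory=list)  # e.g. a test framework
</pre>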

Responsibilities

The testing system is responsible to:

  • Build or otherwise acquire the test subject, such as package, container image, tree …
  • Decide which test suite to run, often by using the standard interface to discover appropriate tests for the dist-git repo that a test subject originated in.
  • Schedule, provision or orchestrate a job to run the test suite on appropriate compute, storage, ...
  • Stage the test suite as described by the standard interface.
  • Invoke the test suite as described by the standard interface.
  • Gather the test results and test artifacts as described by the standard interface.
  • Announce and relay the test results and test artifacts for gating, archival ...
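
Purely as an illustration of how these responsibilities might fit together, the sketch below shows a hypothetical, heavily simplified testing-system driver (scheduling and provisioning are omitted). Every function, path, and value in it is a placeholder invented for this example; the standard interface does not define any of them.

<pre>
# Hypothetical testing-system driver; all names and paths are placeholders.
from typing import List, Tuple


def acquire_subject(repo: str) -> str:
    # Build or otherwise fetch the test subject (RPM, container image, tree, ...).
    return f"builds/{repo}.rpm"                        # placeholder location


def discover_suite(repo: str) -> str:
    # Standard interface: discover the test suite shipped with the dist-git repo.
    return f"{repo}/tests"                             # placeholder location


def invoke_suite(suite: str, subject: str) -> Tuple[bool, List[str]]:
    # Standard interface: stage and invoke the suite against the subject,
    # returning the boolean test result plus a list of artifact paths.
    print(f"running {suite} against {subject}")
    return True, ["artifacts/test.log"]                # placeholder outcome


def run_pipeline(repo: str) -> None:
    subject = acquire_subject(repo)                    # 1. build/acquire the test subject
    suite = discover_suite(repo)                       # 2. discover the suite
    passed, artifacts = invoke_suite(suite, subject)   # 3. stage + invoke, gather results/artifacts
    print("PASS" if passed else "FAIL", artifacts)     # 4. announce/relay for gating, archival, ...


if __name__ == "__main__":
    run_pipeline("example-package")
</pre>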

The standard interface describes how to:

  • Discover a test suite for a given dist-git repo.
  • Uniquely identify a test suite.
  • Stage a test suite and its dependencies such as test frameworks.
  • Provide the test subject to the test suite.
  • Invoke a test suite in a consistent way.
  • Gather test results and test artifacts from the invoked test suite.
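
One way to picture these obligations is as a programmatic contract. The sketch below expresses them as a hypothetical Python protocol; the method names and signatures are assumptions made for illustration, not something any of the proposals define.

<pre>
# Hypothetical shape of the standard interface's obligations; names are illustrative only.
from typing import List, Protocol, Tuple


class StandardInterface(Protocol):
    def discover(self, dist_git_repo: str) -> str:
        """Locate the test suite for a given dist-git repo."""

    def identify(self, suite: str) -> str:
        """Return a unique identifier for the test suite."""

    def stage(self, suite: str) -> None:
        """Stage the suite and its dependencies, such as test frameworks."""

    def invoke(self, suite: str, subject: str) -> Tuple[bool, List[str]]:
        """Run the suite against the test subject; the return value stands in for
        gathering both the test result and the artifact paths."""
</pre>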

The test suite is responsible to:

  • Declare its dependencies such as a test framework via the standard interface.
  • Execute the test framework as necessary.
  • Provision (usually locally) any containers or virtual machines necessary for testing the test subject.
  • Provide test results and test artifacts back to the testing system according to the standard interface (a minimal sketch of such an entry point follows this list).
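
As an illustration of the test suite's side of the contract, here is a self-contained sketch of a suite entry point that receives a test subject, records a test result for automation, and writes a test artifact for humans. The environment variable and file names are assumptions made for this sketch; the standard interface does not mandate them.

<pre>
# Illustrative test-suite entry point; variable and file names are assumptions.
import os
import pathlib


def main() -> int:
    # How the test subject is handed to the suite is an assumption for this
    # sketch; here it arrives via an environment variable.
    subject = os.environ.get("TEST_SUBJECT", "")

    pathlib.Path("artifacts").mkdir(exist_ok=True)
    pathlib.Path("results").mkdir(exist_ok=True)

    # The "test": a trivial check standing in for a real test suite.
    passed = bool(subject)

    # Test artifact (for humans/archival) and test result (for automation).
    pathlib.Path("artifacts/run.log").write_text(f"tested subject: {subject!r}\n")
    pathlib.Path("results/result.txt").write_text("PASS\n" if passed else "FAIL\n")
    return 0 if passed else 1


if __name__ == "__main__":
    raise SystemExit(main())
</pre>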

The format of the textual logs and test artifacts that come out of a test suite is not prescribed by this document. Nor is it envisioned to be standardized across all possible test suites.

Requirements

  • The test suite and test framework SHOULD NOT leak their implementation details into the testing system, other than via the standard interface.
  • The test suite and test framework SHOULD NOT rely on behavior of the testing system other than the standard interface.
  • The standard interface MUST enable a dist-git packager to run a test suite locally.
    • Test suites or test frameworks MAY call out to the network for certain tasks.
  • It MUST be possible to stage an upstream test suite using the standard interface.
  • Both in-situ tests and more rigorous outside-in tests MUST be possible with the standard interface (see the sketch after this list).
    • For in-situ tests the test suite is in the same file system tree and process space as the test subject.
    • For outside-in tests the test suite is outside of the file system tree and process space of the test subject.
  • The test suite and test framework SHOULD be able to provision containers and virtual machines necessary for its testing without requesting them from the testing system.
  • The standard interface SHOULD describe how to uniquely identify a test suite.
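
The sketch below contrasts the in-situ and outside-in modes and shows a suite provisioning a container locally for the outside-in case. Whether a suite uses podman, libvirt, or something else is entirely up to the suite; podman appears here only as one plausible choice (the example assumes it is installed locally), and the image and commands are placeholders.

<pre>
# Illustrative contrast of in-situ vs outside-in invocation; commands are placeholders.
import subprocess
from typing import List


def run_in_situ(command: List[str]) -> int:
    # In-situ: the test runs in the same file system tree and process space
    # as the test subject (here, simply on the current host).
    return subprocess.call(command)


def run_outside_in(image: str, command: List[str]) -> int:
    # Outside-in: the suite provisions a container locally (without requesting
    # it from the testing system) and exercises the subject from outside its
    # file system tree and process space.
    return subprocess.call(["podman", "run", "--rm", image] + command)


if __name__ == "__main__":
    print(run_in_situ(["true"]))
    print(run_outside_in("registry.fedoraproject.org/fedora:latest", ["true"]))
</pre>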


Benefit to Fedora

Developers benefit by having a consistent target for how to describe tests, while also being able to execute them locally while debugging issues or iterating on tests.

By staging and invoking tests consistently in Fedora we create an ecosystem for the tests that allows varied test frameworks as well as CI system infrastructure to interoperate. The integration tests outlast the implementation details of either the frameworks they're written in or the CI systems running them.

Evaluations

Instructions: In-depth evaluations should be done on the sub-proposal pages. Read the proposals; you will find the evaluation sections there. Indicate your vote in the voting section below.

Voting

Every single vote requires an evaluation.

Contributor Packaged Tests Ansible Tests Control Tests Notes
YourUserName +1 This is just an example; please vote for one of the options.
flepied +1 Having dependencies at the test granularity plus metadata on the test makes it a more complete solution that is a superset of the 2 other propositions. Added bonus is to be able to re-use and collaborate with other Linux distros.
Roshi +1
Ausil +1 Ansible is a bit more work, but I think it will give better results and options
pingou +1 Ansible clearly has some downsides, but I do think it is simpler and can be more powerful than the RPM approach
Stef +1 Tests should be a core part of the distro, hence preference for packaging
Jenny +1 Ansible would be a dependency, but it meets the needs better for configuring a system for the tests it is invoking. Packaging tests into RPMs is an added layer of complexity that is overhead and time-consuming. Do not want to go down that path again.
jscotka +0.5 Bigger preference for packaging, because of combining upstream/downstream testing (we need the proper version of tests for installed packages). But I also like the idea of the Ansible tooling. Not decided yet. There is also a third possibility (I hope it will be added ASAP).
miabbott +1 The ease of use and amount of available documentation for Ansible are some of its strongest points for the proposal. Ansible should have better support for provisioning hosts for different kinds of tests.
alivigni +1 The ease of use and amount of available documentation for Ansible are some of its strongest points for the proposal. Ansible is a general tool used inside and outside of Fedora and is constantly being enhanced with new features. It also allows a common way to drive testing that any CI tool can use.
dustymabe +1 I think ansible gives a balance of simple & sophisticated tooling to enable us to write simple tests or write complex tests. If a user is not familiar with ansible then they can use an example yaml file to just execute a shell script. More advanced users can ramp up to ansible's potential.
Nick Coghlan +1 I started this evaluation expecting to vote for the Ansible option, but changed my mind when I asked myself the question: "Given this approach, how hard would it be to bootstrap the other?". Given that Fedora and its derivatives are inherently based on RPM, I think the winner on that basis is a packaging-based approach, with a helper module and spec file boilerplate to bootstrap Ansible-based test environments in libvirt and/or docker for components that can't run their integration tests non-intrusively on the host. This does imply some assumed capabilities for the bootstrapped Ansible environment (1. "Give me a matching local VM"; 2. "Give me a matching container"; 3. "Give me a matching OpenStack VM"; 4. "Give me a matching Beaker machine"), but that would be the case even with the Ansible-as-the-baseline option.
gdk +1 Reason one: easier to onboard new test authors because the Ansible approach will be far easier to learn. Reason two: multiple CI options will be easier to unlock down the road with a suite of Ansible tests ready to go. Reason three: you will have a ton of Ansible knowledge right at your fingertips. :)
contyk +1 The RPM approach has so many benefits I don't even know where to start. Every test package states exactly what it requires, along with the exact build it's testing; we get this information automatically and in most cases for free when building the package. Packages from large, standardized ecosystems such as Perl, Python, Ruby and similar make up a large part of our distribution; for most of those, generating the test subpackages could be almost entirely automated; their tests are in a known location, they're invoked in a known way and we have dependency generators for them. Some already have or had macros for exactly this purpose. Currently everyone (and I think this is unlikely to change, and not because of what tools we use) who would be involved in working on tests knows RPM packaging to at least some degree and wouldn't have to learn anything new. I'm sure the package maintainer would be more than willing to help additional interested contributors in extending the suite. Not running tests during the package build would actually simplify packaging as the package author doesn't need to list all the test build dependencies in the main package. The builds run faster since the buildroots are smaller and they don't block builders for the entire duration of the test suite. You could do that with the other two approaches as well but then you also need to modify the RPM package which is what you're trying to avoid. Also, the packager can switch from testing during build to async testing whenever they feel like it without the tests running twice or not at all at any point in time. It just feels natural to me.
tflink +1 I think that there are down-sides to all of these proposals but considering our constraints, I think that starting with ansible is the best option right now.