From Fedora Project Wiki




What we test

We test modules and their components at various stages of the build and release processes. The tests are meant to be triggered automatically by the infrastructure and test failures should, in most cases, halt the build or release respectively.

Initial testing

This stage includes any tests suitable for running before the module is submitted for build, for instance:

  • Is the modulemd file valid?
  • Do we provide any API?
  • Do we include any components?
  • Do our module-level dependencies look sane?
  • Any spellcheck failures?
  • Are all the referenced components available?
  • If we also store the module definition somewhere else, do the contents of the modulemd file match it?
  • Does the description end with a period?
  • And the summary does not?
  • Do all the rationales end with a period?
  • And possibly other checks confirming that the module complies with the module packaging guidelines.

At this point, only the modulemd file is available as test input. This test is triggered by a dist-git event.
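Several of the checks above boil down to simple string and structure validation of the parsed metadata. A minimal sketch, assuming the modulemd YAML has already been parsed into a Python dict; the field names (data.summary, data.description, data.components.rpms[*].rationale) follow the modulemd layout but should be treated as illustrative rather than authoritative:

```python
def check_modulemd(mmd):
    """Return a list of human-readable problems found in the metadata."""
    problems = []
    data = mmd.get("data", {})

    # The description should end with a period; the summary should not.
    if not data.get("description", "").endswith("."):
        problems.append("description does not end with a period")
    if data.get("summary", "").endswith("."):
        problems.append("summary ends with a period")

    # The module should include components, each with a proper rationale.
    rpms = data.get("components", {}).get("rpms", {})
    if not rpms:
        problems.append("module includes no components")
    for name, component in rpms.items():
        if not component.get("rationale", "").endswith("."):
            problems.append("rationale for %s does not end with a period" % name)

    return problems
```

An empty return value means the metadata passed these particular checks; a real gating check would fail the build on any reported problem.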

RPM-level testing

Every RPM in the module is tested and needs to pass at least certain basic sanity checks. RPMs marked as the module's public API are tested more thoroughly.

  • Are the packages valid RPM files?
  • Do they violate any serious packaging guidelines?

For API:

  • The packages fully pass rpmlint-style checks.
  • If this is an update, the API packages don't break API/ABI and their file listings don't change.

The inputs for this stage are the individual RPMs, and the tests are triggered as soon as the Koji build finishes.
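The most basic "is this a valid RPM file?" check can be sketched as follows: every RPM package begins with the 4-byte lead magic 0xED 0xAB 0xEE 0xDB. Real testing goes much further (rpmlint-style checks, signature verification, and so on); this only illustrates the cheapest sanity gate:

```python
RPM_LEAD_MAGIC = b"\xed\xab\xee\xdb"

def looks_like_rpm(path):
    """Return True if the file begins with the RPM lead magic."""
    with open(path, "rb") as f:
        return f.read(4) == RPM_LEAD_MAGIC
```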

Compose and integration testing

The module also needs to be tested as a whole. At this stage we test the packages again, this time in the context of the complete module.

  • Every single package from the module can be installed.
  • The module repository, together with its module-level runtime dependencies, passes repoclosure.
  • The API, installation profiles and filter only contain packages that are included in this module.
  • The installation profiles don't list any conflicting packages and can actually be installed.

The input for this stage is the RPM repository with all the components in it, and this test is triggered once the module is built, i.e. once all its components are available.
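The check that the API, installation profiles and filter only reference packages actually included in the module is a straightforward set comparison. A sketch, with all names and the function signature being illustrative assumptions:

```python
def check_module_consistency(built_packages, api, profiles, filters):
    """Return names referenced by module metadata but missing from the module.

    built_packages: package names actually present in the module repository
    api:            names declared as the module's public API
    profiles:       mapping of profile name -> list of package names
    filters:        names listed in the module's filter section
    """
    built = set(built_packages)
    referenced = set(api) | set(filters)
    for profile_packages in profiles.values():
        referenced |= set(profile_packages)
    return sorted(referenced - built)
```

An empty result means every referenced package exists in the module; anything returned would indicate a metadata/content mismatch that should fail the compose.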

Deliverable testing

Modules can be delivered in various formats — YUM repositories, containers, ISOs, virtual machine images and more.

We run basic sanity checks on all deliverables to verify they are not corrupted. Additional format-specific tests may also be executed.

The input here is the module deliverable. This test is run once the deliverable is available.
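A basic corruption check common to all deliverable formats is comparing the artifact's digest against the expected value published alongside it. A sketch, assuming a SHA-256 checksum is available; the exact checksum format and location vary per deliverable:

```python
import hashlib

def verify_sha256(path, expected_hexdigest, chunk_size=1 << 20):
    """Return True if the file's SHA-256 digest matches the expected value.

    The file is read in chunks so large images don't need to fit in memory.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_hexdigest
```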

Interoperability testing

This might not be strictly necessary, as modules are meant to be mostly self-contained; however, we could also test application-level modules running with several different Generational Cores, or various composes of the Generational Core, if applicable.

This is a grey area and needs to be investigated more.

How we test it

Our tests are written in Python using the Avocado testing framework. The test execution is performed by Taskotron whenever a relevant build or release message is received.

The results are stored in the results database. Modularity build and release services may inspect the results and respond accordingly.
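To give a feel for what gets stored, a check's outcome might be reported as a small structured record. The field names below (testcase, item, outcome) mirror common ResultsDB conventions, but the exact payload shape here is an assumption, not the actual API:

```python
def make_result(testcase, item, passed, note=""):
    """Build a hypothetical result record for submission to the results database."""
    return {
        "testcase": testcase,          # e.g. "dist.rpmlint"
        "item": item,                  # e.g. the NVR of the tested build
        "outcome": "PASSED" if passed else "FAILED",
        "note": note,
    }
```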

Where are the tests stored

The tests are stored in dist-git, in the test-rpms/* and test-modules/* namespaces, depending on the nature of the test.

Base Runtime specific tests

Base Runtime (and the whole Generational Core) is a module just like any other. However, given its importance the test coverage should go well beyond the generic module testing and focus heavily on component quality and functionality checks. For example:

  • Ensure all the installed files are in correct locations
  • Ensure we ship manual pages for all executables
  • Per-application functionality checks
  • And more
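The man-page coverage check above can be sketched as an operation on package file lists rather than an installed system. The binary and man-page directories below are the conventional locations; the helper name and signature are illustrative:

```python
import os
import re

def executables_without_man_pages(file_list):
    """Given a package file list, return executables lacking a manual page."""
    executables = set()
    man_pages = set()
    for path in file_list:
        directory, name = os.path.split(path)
        if directory in ("/usr/bin", "/usr/sbin", "/bin", "/sbin"):
            executables.add(name)
        elif re.match(r"/usr/share/man/man[0-9]", directory):
            # Strip the section suffix and optional compression extension,
            # e.g. "ls.1.gz" -> "ls".
            man_pages.add(re.sub(r"\.[0-9][a-z]*(\.(gz|bz2|xz))?$", "", name))
    return sorted(executables - man_pages)
```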