In order to use the Flexible Metadata Format effectively for CI testing we need to agree on an essential set of attributes. For each attribute we need to standardize:
- Name ... unique, well chosen, possibly with a prefix
- Type ... expected content type: string, number, list, dictionary
- Purpose ... description of the attribute purpose
For names we should probably consider using a namespace prefix (such as test-description, requirement-description) to prevent future collisions with other attributes. Each attribute definition should contain at least one apt example of the usage, or better, a set of user stories to be covered.
This section lists the attributes proposed so far. It is material for discussion; nothing is final for now.
In order to efficiently collaborate on test maintenance it's crucial to have a short summary of what the test does.
- Name ... summary
- Type ... string (one line, up to 50 characters)
- Purpose ... concise summary of what the test does
- As a developer reviewing 10 failed tests I would like to quickly get an idea of what my change broke.
- Recommended length is 50 characters or less, as for git commit summaries.
summary: Test wget recursive download options
For complex tests it makes sense to provide more detailed description to better clarify what is covered by the test.
- Name ... description
- Type ... string (multi line, plain text)
- Purpose ... detailed description of what the test does
- As a tester I come back to test code I wrote 10 years ago (so I have absolutely no idea about it) and would like to quickly understand what it does.
- As a developer I review existing test coverage for my component and would like to get an overall idea what is covered without having to read the whole test code.
description: |
    This test checks all available wget options related to downloading
    files recursively. First a tree directory structure is created for
    testing. Then a file download is performed for different recursion
    depths specified by the "--level=depth" option.
Throughout the years, free-form tags have proved useful in many scenarios, primarily as an easy way to select a subset of objects.
- Name: tags
- Type: list
- Purpose: free-form tags for easy filtering
- Tags are case-sensitive.
- Using lowercase is recommended.
- As a developer/tester I would like to run only a subset of available tests.
tags: [Tier1, fast]
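As a sketch of how such case-sensitive tag filtering might work once the metadata has been parsed (the `tests` dictionary and the `filter_by_tag` helper below are illustrative, not part of any specification):

```python
# Hypothetical parsed metadata: test name -> attribute dictionary.
tests = {
    "/tests/one": {"tags": ["Tier1", "fast"]},
    "/tests/two": {"tags": ["Tier2", "slow"]},
    "/tests/three": {"tags": ["Tier1", "slow"]},
}

def filter_by_tag(tests, tag):
    """Select tests carrying the given tag (comparison is case-sensitive)."""
    return [name for name, data in tests.items() if tag in data.get("tags", [])]
```

Note that because tags are case-sensitive, filtering by "tier1" would select nothing here; hence the recommendation to stick to lowercase.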
This is the key content attribute defining how the test is to be executed.
- Name: test
- Type: string
- Purpose: shell command which executes the test
- As a developer/tester I want to easily execute all available tests with just one command.
- As a test writer I want to run a single test script in multiple ways (e.g. providing different parameters).
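For example (the script name is illustrative, matching the examples later in this document):

```yaml
test: ./runtest.sh
```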
As the object hierarchy does not need to copy the filesystem structure (e.g. virtual test cases) we need a way to define where the test is located.
- Name: path
- Type: string
- Purpose: filesystem directory to be entered before executing the test
- As a test writer I define two virtual test cases, both using the same script for executing. See also the Virtual Tests example.
Note: Automation is expected to do a cd $path starting from the fmf tree root directory before executing the test. Thus a relative path should be used if your tests reside under the fmf tree, and an absolute path when you want to reference tests elsewhere on the filesystem.
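The note above can be sketched as follows, assuming Python-based automation (the function and argument names are illustrative):

```python
import os
import subprocess

def execute_test(fmf_root, metadata):
    """Change into the test's path before running its test command."""
    path = metadata.get("path", ".")
    # Relative paths are resolved from the fmf tree root; absolute paths
    # reference tests elsewhere on the filesystem and are kept as-is.
    directory = path if os.path.isabs(path) else os.path.join(fmf_root, path)
    subprocess.run(metadata["test"], shell=True, cwd=directory, check=True)
```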
Test scripts might require certain environment variables to be set. Although this can be done on the shell command line as part of the test attribute, it makes sense to have a dedicated field for this, especially when the number of parameters grows. This might be useful for virtual test cases as well.
- Name: environment
- Type: dictionary
- Purpose: environment variables to be set before running the test
- As a tester I need to pass environment variables to my test script to properly execute the desired test scenario.
- As a tester I'm using a single test script for testing different Python implementations specified by environment variable PYTHON.
environment:
    PACKAGE: python37
    PYTHON: python3.7
In order to prevent stuck tests consuming resources we should be able to define a maximum time for test execution.
- Name: duration
- Type: string
- Purpose: maximum time for test execution after which a running test is killed by the test harness
- Let's use the same format as the sleep command. For example: 3m, 2h, 1d.
- As a developer/tester I want to prevent resource wasting by stuck tests.
- As a test harness I need to know after what time I should kill a test if it is still running.
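A minimal sketch of how a harness might convert this format into seconds in order to enforce the timeout (the suffix meanings are assumed to follow the sleep command: s, m, h, d):

```python
# Seconds per unit, following the sleep command's suffixes.
UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400}

def duration_to_seconds(duration):
    """Parse strings like '3m', '2h' or '1d'; a bare number means seconds."""
    if duration[-1] in UNITS:
        return int(duration[:-1]) * UNITS[duration[-1]]
    return int(duration)
```

The result could then be passed, for instance, to subprocess.run(..., timeout=...) so the running test is killed once the limit expires.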
Sometimes a test case is only relevant for a specific environment. Test Case Relevancy allows filtering out irrelevant test cases.
- Name: relevancy
- Type: list
- Purpose: Test Case Relevancy rules used for filtering relevant test cases for a given environment.
- As a tester I want to skip execution of a particular test case in a given test environment.
Environment is defined by one or more environment dimensions such as component. Relevancy consists of a set of rules of the form condition: decision. For more details see the Test Case Relevancy documentation.
relevancy:
    - "distro < f-28: False"
    - "distro = rhel-7 & arch = ppc64: False"
When there are several people collaborating on tests it's useful to have a way to find out who is responsible for what.
- Name ... contact
- Type ... string or list of strings (name with email address)
- Purpose ... person(s) maintaining the test
- As a developer reviewing a complex failed test I would like to contact the person who maintains the code and understands it well.
contact: Name Surname <firstname.lastname@example.org>
There could be multiple persons involved in the test maintenance as well:
contact:
    - First Person <email@example.com>
    - Second Person <firstname.lastname@example.org>
It's useful to be able to easily select all tests relevant for given component or package. As they do not always have to be stored in the same repository and because many tests cover multiple components a dedicated field is needed.
- Name ... component
- Type ... list of strings
- Purpose ... relevant fedora/rhel source package names for which the test should be executed
- As a SELinux tester testing the checkpolicy component I want to run Tier1 tests for all SELinux components plus all checkpolicy tests.
component: [libselinux, checkpolicy]
The fmf command can be used to select the test set described by the user story above:
fmf --key test --filter 'tags:Tier1 | component:checkpolicy'
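To make the semantics of such a filter concrete, here is a toy evaluator, assuming that '|' means OR between key:value conditions (the real fmf filter syntax is richer; this sketch only covers the expression shown above):

```python
def matches(data, expression):
    """Return True if the metadata satisfies any key:value condition."""
    for condition in expression.split("|"):
        key, _, value = condition.strip().partition(":")
        # A condition holds when the value appears in the list stored
        # under the key (e.g. tags or component).
        if value in data.get(key, []):
            return True
    return False
```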
It's quite common to organize tests into "tiers" based on their importance, stability, duration and other aspects. Tags have often been used for this as there was no corresponding attribute available. It might make sense to have a dedicated field for this functionality as well.
- Name ... tier
- Type ... string
- Purpose ... name of the tier set this test belongs to
- As a tester testing a security advisory I want to run the stable set of important tests which cover the most essential functionality and can provide test results in a short time.
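A hypothetical example; since the attribute type is a string, the tier name is quoted to keep the type explicit:

```yaml
tier: "1"
```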
In some cases tests have special requirements for the environment in order to run successfully. For now just simple qemu options for the standard-inventory-qcow2 provisioner are supported.
- Name ... provision
- Type ... dictionary
- Purpose ... set of environment requirements
- As a tester I want to specify the amount of memory which needs to be available for the test.
- As a tester I want to specify the network interface card to be used in qemu.
provision:
    standard-inventory-qcow2:
        qemu:
            m: 3G
            net_nic:
                model: e1000
Memory size is specified in megabytes; optionally, a suffix of "M" or "G" can be used. Run qemu-system-x86_64 -net nic,model=help for a list of available devices. See also a real-life example of usage.
Three beakerlib tests, each in its own directory:
test: ./runtest.sh
/one:
    path: tests/one
/two:
    path: tests/two
/three:
    path: tests/three
tests/one
path: tests/one
test: ./runtest.sh

tests/two
path: tests/two
test: ./runtest.sh

tests/three
path: tests/three
test: ./runtest.sh
Three different scripts residing in a single directory:
path: tests
/one:
    test: ./one
/two:
    test: ./two
/three:
    test: ./three
tests/one
path: tests
test: ./one

tests/two
path: tests
test: ./two

tests/three
path: tests
test: ./three
Three virtual test cases based on a single test script:
path: tests/virtual
/one:
    test: ./script --one
/two:
    test: ./script --two
/three:
    test: ./script --three
tests/one
path: tests
test: ./script --one

tests/two
path: tests
test: ./script --two

tests/three
path: tests
test: ./script --three