# AliVigni: In the invocation, why would I want to hardcode absolute paths for test execution, artifacts, and logs? These should be relative paths, so that wherever you run things they stay in the local workspace - my machine, Jenkins, Taskotron, etc.
#* MartinPitt: I reworked the invocation; it was also impractical for tests that run as non-root, and it would have potentially clobbered the root directory with temporary stuff.


# MartinPitt: In Fedora 25 there are currently 153 packages named <code>*-tests</code>, none of which match this specification. Thus "If this file [check file in dist-git] is empty then the list of packages will default to %{name}-tests" does not work for discovery, as it would also catch all of these packages. So we either need to change to a suffix that isn't being used yet, or (preferably) always explicitly declare test packages which follow ''this'' standard.
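
For illustration, the existing name clashes can be checked straight from the package index; <code>dnf repoquery</code accepts name globs, and the output of course depends on which repositories are enabled:

<pre>
# List binary packages whose name ends in -tests, together with the
# source RPM each one was built from
dnf repoquery --qf '%{name} %{sourcerpm}' '*-tests' | sort -u

# Count the distinct package names (cf. the 153 figure above)
dnf repoquery --qf '%{name}' '*-tests' | sort -u | wc -l
</pre>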


# MartinPitt: It seems to me that shipping tests as RPMs and discovery in dist-git are the wrong way around. In the current spec we require packaged tests and (effectively, see above) declaring the test packages in dist-git.
#* Test declarations in dist-git are ''not'' discoverable for an automated system that needs to decide which tests to run for a particular package update. For that it needs to be able to tell efficiently which (source) packages have tests and what their names are, and checking out ''all'' Fedora package gits is impractical. Thus this either (1) needs to be moved to a tag, a special package name, or a magic prefix in the package summary such as "FEDTEST:" (we need to give this thing a name!), so that tests can be discovered from the package index; or (2) we need an automated service which indexes all dist-gits regularly.
#* Always packaging tests as RPMs doesn't fundamentally restrict the scope of this (we can package things like distro-upgrade or Anaconda installer tests as new source packages which only ship test subpackages). However, in the vast majority of cases our tests will be platform-independent (i.e. written in Python, shell, or other scripting languages) and will not require any compiled bits. Of course sometimes they do, and for those the compiled bits can still be shipped in a <code>-tests</code> helper RPM. So it would seem prudent to simply ship the tests themselves in dist-git (<code>tests/*</code> executables), and the test metadata as a separate file. This would then not limit test metadata to what RPMs can express.
#* Note that the majority of tests will live in dist-git no matter what - either for running directly, or just as source paths for creating the <code>-tests</code> RPMs.
#* If we continue the "always package tests as RPMs" route, we ''will'' need to put them into a separate archive, similar to <code>-debuginfo</code>, to avoid unduly blowing up the package index for production systems. How much effort would that be to set up?
#* So at the very least this needs a justification of why packaging tests is desirable over putting them into dist-git directly. The main advantage that I see is that it makes test dependency installation a bit more straightforward for human users. But (1) you should really use a tool for this which cleans up after itself (and then it doesn't matter whether that installs RPMs or checks out git), and (2) this prevents running tests on production systems and/or when you don't have root privileges.
 
#* In case we do decide to ship tests in dist-git instead of RPMs, we still need to tag the SRPMs or RPMs with some "I have tests" marker for CI system discoverability.


# tflink: As I understand it, the proposal requires <code>-tests</code> subpackages to either have globally unique file names or explicit <code>Conflicts</code> in the spec file. Why not use a subdirectory matching the <code>name</code> from the spec file, e.g. <code>/usr/tests/gzip</code> for the gzip packaged tests? That would make filename conflicts much less likely, and it would be one less thing for packagers to worry about when including tests.
#* MartinPitt: Excellent point; spec changed to <code>/usr/tests/</code>''srcpkgname''<code>/</code> to make use of the already unique name space that source packages (a.k.a. spec file names) give us. Will that be sufficient to map a source package to all of its binary packages that contain tests? I.e. "give me all rpms of the <code>gtk+</code> source that provide tests"? (See the query sketch below.)
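
For illustration, one conceivable way to answer that mapping question from repo metadata alone - assuming conforming test subpackages keep a <code>-tests</code> name suffix, which (as noted above) is not reliable by itself today:

<pre>
# List -tests subpackages together with the source RPM each was built
# from, then filter for the source package of interest (gtk+ here).
# Sketch only; a real CI system would want an explicit marker instead.
dnf repoquery --qf '%{sourcerpm} %{name}' '*-tests' | grep '^gtk+'
</pre>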

# pingou:
#* "Execute all executable files in <code>/usr/tests/*/</code> directories one at a time." - This is a nitpick, but there will be people complaining about it, since <code>/usr/tests</code> isn't in the FHS and isn't really a good place for executables; we could suggest using <code>/usr/libexec</code>, which is meant for executables, and put a <code>tests</code> subfolder there.
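
For reference, the execution rule quoted above amounts to something like this minimal runner (a sketch only - no artifact collection, logging, or timeout handling):

<pre>
#!/bin/sh
# Run every executable file under /usr/tests/<srcpkgname>/, one at a
# time, and fail if any test fails.
rc=0
for t in /usr/tests/*/*; do
    if [ -f "$t" ] && [ -x "$t" ]; then
        echo "=== running $t"
        "$t" || { echo "FAIL: $t"; rc=1; }
    fi
done
exit $rc
</pre>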
 
* I honestly do not see the advantage of packaging the tests. I doubt that, for most projects, upstream is going to release the tests as a tarball, which means the packagers will have to do that themselves, and then write down the process for how to execute them. Why not do this with something like Ansible from the start? It makes it easy to list the dependencies (just install them in one task), and specifying how the tests should be run can be done just as easily in Ansible. Packaging the tests also means that we would have to go through the FPC to get this approved in the packaging guidelines, for (IMHO) little benefit.
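
For concreteness, a run under that suggestion might look like this (the <code>tests/tests.yml</code> playbook path and layout are hypothetical, not part of any agreed standard):

<pre>
# From a dist-git checkout, with Ansible installed: a single playbook
# installs the test dependencies (one package task) and then runs the
# tests against the local machine.
ansible-playbook -i localhost, -c local tests/tests.yml
</pre>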
 
* I agree with the above about there not being enough benefit to packaged tests. Right now we already have problems with slow dnf, downloading too much, huge metadata, and extra-slow dependency solving. This won't be beneficial to our users, yet it would have a negative impact on them. Some tool like fedpkg will be needed anyway. Same with configs, dependencies, etc. - we will need something else, as RPM can't cover everything. So I think it would be a good idea to leave RPM out of this and not put yet more weight on the core infrastructure, processes, and user experience.
 
 
[[cevich]]:
 
* Writing standards is hard, particularly because they rarely remain static. However, without a way to check that some target suite, framework, or test conforms to expectations, changes to the standard are inherently opposed. The "easy fix" here is to version each layer so that the layer above can assert its expectations, e.g. a versioned test can be checked by its framework, and so on up the layers.
 
* Whatever the tooling is, duplicating more than a moderate amount of execution logic across hundreds or thousands of packages is ripe for disaster. If there's any bug or necessary change, it means fixing the same problem in thousands of packages. Worse, over time all the copies will tend to diverge from each other, making it even harder. Part of the standard should include a "library" of routines/roles/files etc. This can then be versioned and therefore asserted (or provided) by higher layers, i.e. make the library package a <code>BuildRequires</code> in the spec. (A sketch of what a test using such a library might look like follows.)
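
Everything below is hypothetical - the library name, its install path, and the helper functions are made up purely to illustrate the "versioned shared library" idea:

<pre>
#!/bin/sh
# tests/01-smoke.sh (hypothetical): a per-package test that reuses
# shared routines instead of duplicating them.
. /usr/share/fedtest-lib/functions.sh   # hypothetical shared library

# Assert the library interface version this test was written against,
# so an incompatible library change fails loudly instead of silently.
fedtest_require_api 1                   # hypothetical helper

fedtest_assert_cmd gzip --version       # hypothetical helper
</pre>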
 
* A third option: include the tooling choice as part of the versioning standard. Then you can support all three (packaged scripts, Ansible, or control) and add more later, e.g. if <code>tests/VERSION</code> ends with "a", do the Ansible thing; if it ends with "b", run the scripts. (Sketched below.)
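
A minimal sketch of such a dispatcher, assuming a <code>tests/VERSION</code> file whose trailing letter selects the tooling (the file name and the "a"/"b" convention come from the comment above; the invoked paths are illustrative):

<pre>
#!/bin/sh
# Dispatch on the trailing letter of tests/VERSION.
case "$(cat tests/VERSION)" in
    *a) ansible-playbook -i localhost, -c local tests/tests.yml ;;
    *b) for t in tests/*.sh; do "$t" || exit 1; done ;;
    *)  echo "unknown test interface version" >&2; exit 2 ;;
esac
</pre>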
