From Fedora Project Wiki

The kernel testing initiative aims to increase the quality of the Fedora kernel through both manual and automated testing. We plan to build a framework that will:


  • Allow users to easily run a regression test suite on bare hardware to catch driver issues
  • Provide automated regression testing against every kernel build
  • Provide automated performance testing of new kernel releases

As we are in the early stages of getting this into place, more details will be fleshed out as we have them. You can always check our progress at the bottom of this page.

Regression Testing

The goal is a simple regression test suite that any user can run against their running kernel. The tests should be fast and non-destructive. Tests with destructive potential should be marked separately so that they do not run in the common case, but can still be run as part of an extended run when the user does not fear data loss. These tests will be run as part of the automated testing process, but should be easy for anyone to run without having to set up autotest.

A number of single-check regression tests should be created to catch common kernel regressions. These can be individual executable tests; the most important criterion is that each test use the common reporting format described in KernelRegressionTestGuidelines. Because the results of potentially dozens of tests scroll by quickly, it must be easy to spot failures at a glance. The tests will be called by a master test script shipped in a kernel-testing subpackage, or possibly by a makefile within the Fedora kernel git tree. As many of the individual tests will be driver-related, the control file can check that a specific driver is loaded, and simply skip the test if it is not or if the hardware is not available.
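A minimal sketch of what such a master script could look like. The directory layout, the exit-code convention, and the one-line PASS/FAIL/SKIP summary are illustrative assumptions standing in for the common reporting format from KernelRegressionTestGuidelines; `driver_loaded` and `run_test` are hypothetical names, not part of the actual framework:

```python
#!/usr/bin/env python3
"""Sketch of a master runner for single-check regression tests.

Assumptions (not taken from this page): each test is an executable
that exits 0 on pass and non-zero on fail, and a one-line
PASS/FAIL/SKIP summary stands in for the common reporting format.
"""
import os
import subprocess


def driver_loaded(module):
    """Return True if a kernel module appears in /proc/modules."""
    try:
        with open("/proc/modules") as f:
            return any(line.split()[0] == module for line in f)
    except OSError:
        return False


def run_test(path, requires_module=None):
    """Run one test executable, skipping it if its driver is absent."""
    name = os.path.basename(path)
    if requires_module and not driver_loaded(requires_module):
        return f"SKIP {name} (module {requires_module} not loaded)"
    result = subprocess.run([path], capture_output=True)
    return f"{'PASS' if result.returncode == 0 else 'FAIL'} {name}"
```

Keeping each test an independent executable means contributors can write tests in any language, as long as the exit code and output format stay consistent.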

Automated Testing

The Fedora Message Bus (fedmsg) publishes information about completed builds. With a client sitting on the bus, we get instant notification when a build finishes, which allows our KVM host to launch the appropriate guests and begin testing immediately. Currently, all builds are being tested with the regression test suite. More details can be found in the Flock Presentation.
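The bus-side logic boils down to matching the build topic and package name before kicking anything off. A sketch of that filter; the topic string follows the fedmsg buildsys naming convention, but the exact message fields and the state value for a completed build are assumptions here:

```python
"""Sketch of the fedmsg-side filter that decides when to start testing.

The message layout ("name", "new") and the state code for a completed
build are assumptions about the buildsys messages, not confirmed here.
"""

KOJI_BUILD_TOPIC = "org.fedoraproject.prod.buildsys.build.state.change"
STATE_COMPLETE = 1  # assumed koji state code for a finished build


def is_finished_kernel_build(topic, msg):
    """True when a bus message announces a completed kernel build."""
    return (
        topic == KOJI_BUILD_TOPIC
        and msg.get("name") == "kernel"
        and msg.get("new") == STATE_COMPLETE
    )
```

A consumer would call this for every message it receives and launch the KVM guests only when it returns True.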

Finally, we want to be able to tie in automated performance workloads for testing specific builds. This will allow us to catch performance regressions more easily, but as these are more heavyweight tests, we do not want to waste cycles running them on debug kernels or minor updates. An automated performance regression test framework will be in place soon to allow performance testing of non-debug Rawhide kernels, with fairly strict controls to ensure kernels are compared fairly.
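Skipping debug kernels can be a simple gate on the release string. This sketch assumes the Fedora convention that debug kernel builds carry a "+debug" suffix in their `uname -r` output; treat that convention, and the function names, as assumptions rather than part of the framework:

```python
"""Sketch of the debug-kernel gate for the performance pipeline.

Assumes Fedora debug builds are identifiable by a "+debug" suffix in
the kernel release string; this naming convention is an assumption.
"""

def is_debug_kernel(release):
    """True for release strings that look like Fedora debug builds."""
    return release.endswith("+debug")


def should_run_perf(release):
    """Only non-debug kernels are worth performance testing."""
    return not is_debug_kernel(release)
```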

Performance Testing

More in-depth performance testing should be tied into autotest. As this is the last phase of the project, many of the requirements have not been set. There are a few key elements that we do know:

  • Testing should use a common platform, with the kernel being the only change, to provide meaningful results.
  • Testing should be limited to release kernels; there is no benefit to performance testing debug kernels.
  • Comparative results should be graphed for quick verification.
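The comparison step behind those graphs can be sketched as follows. The result layout, the benchmark names in the usage note, and the 5% threshold are all illustrative assumptions; scores are assumed to be higher-is-better:

```python
"""Sketch of comparing benchmark results between two kernel builds.

The {benchmark: score} layout and the regression threshold are
illustrative assumptions; higher scores are assumed to be better.
"""

def compare_runs(baseline, candidate, threshold=0.05):
    """Return per-benchmark relative change, flagging regressions.

    A drop larger than `threshold` (default 5%) relative to the
    baseline kernel marks that benchmark as a regression.
    """
    report = {}
    for bench, base_score in baseline.items():
        new_score = candidate.get(bench)
        if new_score is None:
            continue  # benchmark was not run on the candidate kernel
        change = (new_score - base_score) / base_score
        report[bench] = {
            "change": change,
            "regression": change < -threshold,
        }
    return report
```

The resulting per-benchmark change values map directly onto the comparative graphs called for above.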

Status

  • Regression testing: See KernelRegressionTests. We are currently coming up with a list of test cases to be written. The framework is now on fedorahosted.
  • Automated Testing: Working with upstream to get the guest koji builds module included. Documentation is in progress. Config file integration still to do.
  • Performance Testing: Not yet started