jlaska's test ideas

      * All updates must include a new changelog entry - someday I'd
        like to require a bug (or ticket) in the changelog entry, but
        perhaps that's too aggressive now.
      * What MUST sections can we automate from the package review
        guidelines [2]? (a rough sketch of this kind of automation
        appears below)
      * SPEC file sanity, including ...
              * Proper upstream Source URL included in SPEC?
              * When are changes to %config files acceptable?
              * Is %defattr defined in the SPEC?
              * Any sanity tests we can run against the scriptlets
                (%pre, %post, etc.) included in a spec file?
              * How to handle unapplied %patches?
      * License compat review?
      * Stripped vs unstripped binaries, is there a preference?
      * Validate man pages?
      * What existing *lint tools can we run, and what results are
        acceptable? (rpmlint, elflint, xmllint)
      * Any relationship to the new privilege escalation policy [3]?

[2] https://fedoraproject.org/wiki/Packaging:ReviewGuidelines
[3] https://fedoraproject.org/wiki/Privilege_escalation_policy
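
A minimal sketch of what automating two of the checks above could look like, assuming we only have the built .rpm file on disk and shell out to the stock rpm and rpmlint commands. The helper names and the "any rpmlint error means FAIL" policy are illustrative assumptions, not an existing AutoQA interface.

    #!/usr/bin/env python3
    """Sketch only: changelog-entry and rpmlint checks against a built package."""
    import subprocess
    import sys

    def has_changelog_entry(rpm_path):
        # PASS if the package carries at least one %changelog entry.
        out = subprocess.run(["rpm", "-qp", "--changelog", rpm_path],
                             capture_output=True, text=True, check=True)
        return bool(out.stdout.strip())

    def rpmlint_error_lines(rpm_path):
        # rpmlint exits non-zero when it finds problems, so no check=True here;
        # which errors are "acceptable" is a policy question this sketch dodges.
        out = subprocess.run(["rpmlint", rpm_path], capture_output=True, text=True)
        return [line for line in out.stdout.splitlines() if ": E: " in line]

    if __name__ == "__main__":
        pkg = sys.argv[1]  # path to a .rpm file
        results = {
            "changelog-entry": "PASS" if has_changelog_entry(pkg) else "FAIL",
            "rpmlint-errors": "PASS" if not rpmlint_error_lines(pkg) else "FAIL",
        }
        for name, verdict in results.items():
            print(name, verdict)

The same pattern (one small function per test case, PASS/FAIL as the only outputs) would extend to the spec-file sanity items above.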

wwoods thoughts

  • The introduction needs to be clear that this is an acceptance test plan - all these tests have to pass before we can even think about functional testing of the package.
    • Kparal: I'm a little lost in the terminology, because according to Wikipedia "acceptance testing" is also "functional testing" or "<you-name-it> testing". But I understand what you mean: these are just the basic tests, and more specific tests for that package will follow in the future.
    • Specifically: it needs to be clear that when an update has PASSED this test plan, that just means it's ready for real testing. The actual testing of the update is not complete; it has just barely started at this point.
    • Maybe the final result of the test plan should reflect this: If all the test cases pass, the package is ACCEPTED, otherwise it's REJECTED. (See the sketch after this list.)
      • Kparal: This is perfect, much better than my original terminology. Thanks, replaced.
      • Each test case can still use PASS/FAIL, of course.
      • NEEDS_INSPECTION is fine as-is.
  • If possible, we should have links for each of the listed test cases that outline exactly what's being tested (and/or link to the source code).
    • Kparal: Sure, each test case should have its own page with a description, I agree.
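
To make the distinction above concrete, here is a small illustration (my own, not existing AutoQA code) of how per-test-case results could roll up into the plan-level verdict; treating a plan-level NEEDS_INSPECTION as a third possible outcome is an assumption on my part.

    # Illustration only: per-case results are PASS / FAIL / NEEDS_INSPECTION,
    # the plan-level outcome is ACCEPTED / REJECTED (or "a human must look").
    def overall_verdict(case_results):
        values = case_results.values()
        if "FAIL" in values:
            return "REJECTED"
        if "NEEDS_INSPECTION" in values:
            return "NEEDS_INSPECTION"  # assumption: punt the whole plan to a human
        # ACCEPTED only means "ready for real (functional) testing", not "done".
        return "ACCEPTED"

    print(overall_verdict({"rpmlint": "PASS", "changelog": "PASS"}))  # ACCEPTED
    print(overall_verdict({"rpmlint": "FAIL", "changelog": "PASS"}))  # REJECTED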

failed mandatory test can be brought to FESCO

Seth Vidal suggested to me that if the package maintainers don't agree with a failed mandatory test (they claim it should pass), the issue can be brought to FESCO. FESCO could, for example, grant an exception for that package or deny the request. -- Kparal 11:58, 5 March 2010 (UTC)