UpdatePolicy (draft)

There are currently several competing proposals for what an update policy should look like. They are all listed on this page.

Introduction
We assume the following axioms:

1) Updates to stable that result in any reduction of functionality to the user are unacceptable.

2) It is impossible to ensure that functionality will not be reduced without sufficient testing.

3) Sufficient testing of software inherently requires manual intervention by more than one individual.

Proposal
The ability for maintainers to push an update directly into the updates repository will be disabled. Before being added to updates, a package must receive a net karma of +3 in Bodhi.

It should be noted that this does not require that packages pass through updates-testing. The package will appear in Bodhi as soon as the update is available. If sufficient karma is added before a push occurs, and the update is flagged for automatic pushing when the karma threshold is reached, the update will be pushed directly to updates.
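To make the proposed flow concrete, here is a deliberately simplified toy model of the push decision described above. This is an illustration only, not Bodhi's actual implementation; the function name and parameters are invented for this sketch.

```python
# Toy model of the proposed push logic; the real Bodhi code differs.
# An update's net karma is the sum of +1/-1 feedback votes.

def push_destination(karma_votes, threshold=3, autopush=True):
    """Return where an update may go under the proposed policy.

    karma_votes: list of +1/-1 feedback values from testers.
    threshold:   net karma required before a stable push (+3 here).
    autopush:    whether the maintainer flagged the update for
                 automatic pushing once the threshold is reached.
    """
    net_karma = sum(karma_votes)
    if net_karma >= threshold:
        # Enough positive feedback: eligible for stable; pushed
        # automatically if the maintainer enabled autopush.
        return "stable" if autopush else "stable (manual push)"
    # Otherwise the update waits in (or heads to) updates-testing.
    return "updates-testing"

print(push_destination([+1, +1, +1]))      # three +1s reach the +3 threshold
print(push_destination([+1, +1, -1, +1]))  # net +2: not yet eligible
```

Note that, as the paragraph above says, the testing and the karma can accumulate while the update is still pending, so a sufficiently tested update never needs to spend time in the updates-testing repository itself.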

It is the expectation of FESCo that the majority of updates should easily be able to garner the necessary karma in a minimal amount of time. Updates in response to functional regressions should be coordinated with those who have observed the regressions, in order to confirm that the update fixes them correctly.

At present, this policy will not apply to updates that are flagged as security updates.

The future
Defining the purpose of Fedora updates is outside the scope of FESCo. However, we note that updates intended to add new functionality are more likely to result in user-visible regressions, and that updates which alter ABI or API are likely to break local customisations even if all Fedora packages are updated to match. We encourage the development of mechanisms that ensure that users who wish to obtain the latest version of software remain able to do so, while not introducing functional, UI or interface bugs for users who wish to maintain a stable platform.

Requirements for Critpath packages
Rationale: Expedited security fixes have caused some serious regressions in the past (the D-Bus, BIND and Thunderbird updates, etc.).
 * Must go through the updates-testing repository, even for security fixes
 * Only major bug fixes and security fixes
 * Requires the QA team to sign off on these updates; I will leave them to define the criteria. I believe the criteria should be based on feedback from testers rather than the number of days.
 * Exceptions or expedited update requests must go via release engineering

See Critical_Path_Packages for the critical path packages list.

Requirements for Non-critical path packages

 * Don't blindly push every upstream release as an update
 * Preserve stability and avoid unexpected changes; push updates with enhancements only if the benefit is considered worth the risk of potential regressions

Recommendations

 * Run AutoQA on all updates
 * Hook up PackageKit to the updates-testing repo and allow users to opt in and provide feedback easily
 * Evaluate extending the criteria based on how well we succeed with a more conservative update policy for critical path packages

Introduction
We assume the same three axioms as stated in the Introduction above.

Proposal
For a package to be pushed to the stable updates repository, it must meet the following criteria. These criteria can be considered as separate proposals that are stacked on top of each other.

1. All updates (even security) must pass acceptance criteria before being pushed.

Rationale: ''If a package breaks dependencies, does not install, or fails other obvious tests, it should not be pushed. Period. Obviously, this proposal would not be enacted until AutoQA is live.''

The list of tests will be:

 1. Packages must not break dependencies
 2. Packages must not break the upgrade path
 3. Packages must not introduce new file/package conflicts
 4. Packages must be able to install cleanly

Additional tests will be set by FESCo with input from QA. As a discussion point, some subset of these tests could be run on pending updates, and the results used to block updates going to updates-testing as well.
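To illustrate the intent of the first and third checks, here is a deliberately simplified sketch. The data structures and function names are invented for this example; the real AutoQA tests run against actual repository metadata, not dictionaries like these.

```python
# Toy acceptance checks over simplified package metadata. Real
# dependency and conflict checking works on repodata; this only
# models the idea behind the tests.

def broken_dependencies(packages):
    """Return requirements that no package in the set provides."""
    provided = {cap for pkg in packages.values() for cap in pkg["provides"]}
    return sorted(
        req
        for pkg in packages.values()
        for req in pkg["requires"]
        if req not in provided
    )

def file_conflicts(packages):
    """Return files owned by more than one package."""
    owners = {}
    for name, pkg in packages.items():
        for path in pkg["files"]:
            owners.setdefault(path, []).append(name)
    return {path: names for path, names in owners.items() if len(names) > 1}

# Hypothetical two-package repository for demonstration.
repo = {
    "foo": {"provides": ["libfoo"], "requires": ["libbar"],
            "files": ["/usr/bin/foo"]},
    "bar": {"provides": ["libbar"], "requires": [],
            "files": ["/usr/bin/foo"]},
}
print(broken_dependencies(repo))  # every requirement is provided
print(file_conflicts(repo))       # /usr/bin/foo is owned by both packages
```

An update would fail acceptance if either function reported a non-empty result for the repository with the update applied.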



2. Updates that constitute a part of the 'important' package set (defined below) must follow the rules defined for critical path packages for pending releases, meaning that they require positive karma from a defined group of testers before they go stable. This also includes security updates for these packages.

The 'important' package set is defined as the following:


 * The current critical path package set
 * All major desktop environments' core functionality (GNOME, KDE, XFCE, LXDE)
 * Package updating frameworks (gnome-packagekit, kpackagekit)
 * Major desktop productivity apps. An initial list would be firefox, kdebase (konqueror), thunderbird, evolution, kdepim (kmail).

We can generate this list in the same way as we currently generate the critical path list (or we can redefine critical path to be this list). Changes to these criteria would be made by FESCo or their delegate.

Rationale: ''These are the sets of packages where regressions most affect users, and would most prevent them from Getting Their Work Done. Furthermore, while I can accept that there may be some packages in Fedora that cannot find a significant enough testing base for all potential updates, I reject the notion that any desktop widely used enough that we deploy an image or spin for it would fit into that category. I accept that this places a larger burden on QA, and would expect them to be able to contribute testing to this initiative. How to denote the group of testers is yet to be determined; QA is investigating this. (For pending releases, we use QA or releng.)''



3. All other updates must do one of the following:


 * reach the criteria laid out in section 2
 * reach their specified positive bodhi karma threshold
 * spend some minimum amount of time in updates-testing

The proposed time would be one week, but this is open to negotiation. We can track downloads on our one Fedora-infrastructure-controlled mirror as a mechanism to see what usage the package is getting.

Rationale: ''We do want additional eyes on updates wherever possible. We do have one Fedora mirror that Fedora infrastructure controls; we should be able to mine this server for data on updates-testing downloads.''

Any update that wants to bypass these procedures would need majority approval from FESCo.

Critique
Hans de Goede: I like this; it seems a well-balanced proposal. I think the one AdamW proposed, which merges this one with the one kparal did, is even better, but as I understand it FESCo has chosen to move forward with this one, so that is why I'm providing feedback here. All in all a good proposal, but can we please drop the ", with a tracked number of downloads" part of the criteria for non-important packages? This is going to force niche packages (i.e. cross-compiler toolchains, or other packages with small user sets) to stay in updates-testing for a very long time.

Kevin Fenzi: I like this proposal, just a few comments:
 * Should we try to set some kind of timeline for point 1? Can we get any idea of when basic AutoQA might be ready?
 * What about a security update for a package not in the "important" set? From my reading of this, it's not covered.
 * Do we have any way to list the "important" set here? It would be good to know what we are covering.
 * Might we want to change "Rel-eng/QA" to "proventesters" or whatever? Also, should we decide whether that should be an "and" or an "or" there? Do they need a +1 from each, or just a +1 from either? We should try to confirm that each of the major important-package areas has a proventester or equivalent to do this testing.
 * We might want to have a note about how to add or remove things from the "important" set, i.e. we might want to add apps, remove them, or the like. There should be a way for packagers to request that we do this.
 * Might specify "positive threshold" in point 3 to avoid someone setting the karma needed to 0.
 * We should decide what info we can get from "tracked downloads" and decide how much that matters. I think at least to start, we might want to just set this to: anyone at all has downloaded it, i.e. if it's 0 there is a problem.
 * Finally, should we determine a timeline/checklist for implementing this if we approve it (when any code changes are done, when it should be announced, when it goes live, etc.)?

Kamil Páral:
 * Point 1.4 could be replaced by a Test Package Sanity requirement. Not only must package installation work, but also removal, upgrade, etc.
 * Regarding point 2: will there be any required minimal amount of positive karma the updates must meet (or will a single +1 suffice)? I'm not asking for a concrete value, just the generic answer.
 * Regarding point 2: will there be any minimal amount of time the updates must spend in updates-testing, regardless of karma? It may be much shorter than for "all other updates". I still think it's a good idea, because a single +1 a few hours after the push to testing does not mean that there can't be two -1's a few hours later. And here we are talking about _important_ packages.
 * Regarding point 3: may the positive karma threshold be set by package maintainers, or will it be fixed? Or can it be raised, but not lowered below some minimal value?
 * I miss some workflow overview similar to the one in my proposal. Who is the final guardian ensuring that all requirements are met? I suppose it's RelEng. Of course a lot of checks will be handled by our tools (Bodhi will ensure that an update passed AutoQA tests or received an exception from FESCo, etc.), but there may still be things to check manually before the final push to stable, for example whether the update type is one of the allowed update types. Who will do that, and when?
 * Regarding AutoQA tests: we, the QA team, would like to provide not only mandatory tests for packages (as stated in point 1), but also some additional tests; let's call them "introspection". These introspection tests could fail the overall QA acceptance result, but they may be waived by the maintainer if he/she thinks they are unimportant or false alarms. I don't know whether this is important enough to be mentioned in the proposal (in some kind of short general sentence), so I'm raising the issue here.