Fedora 13 QA Retrospective

Introduction
This page is intended to gather feedback from the Fedora QA community on things that worked well and things that could have been better with the testing of Fedora 13. The feedback will be used as a basis for identifying areas for improvement for Fedora 14 testing. Any thoughts, big or small, are valuable. If someone already provided feedback similar to what you'd like to add, don't worry ... add your thoughts regardless.

For any questions or concerns, send mail to test@lists.fedoraproject.org.

Providing feedback
Adding feedback is fairly straightforward. If you already have a Fedora account ...
 * 1) Login to the wiki
 * 2) Select [Edit] for the appropriate section below.
 * 3) Add your feedback using the format: - I like ____ about the new ____ process
 * 4) When done, Submit your changes

Otherwise, if you do not have a Fedora account, follow the instructions below ...
 * 1) Select the appropriate page for your feedback...
    * Something that worked well
    * Something that didn't work well
    * Anything on your QA wishlist
 * 2) Add your feedback using the format: - I like ____ about the new ____ process
 * 3) When done, Submit your changes

Things that went well

 * jlaska - Release Criteria - Having release criteria that have been reviewed and accepted by key stakeholders really expedites testing by shortening bug escalation time and reducing uncertainty about user impact. Instead of spending precious time debating the merits of each bug report, we can collectively debate the criteria ... and adjust as needed.
 * rhe - Trac tickets for creating milestones - made it efficient to track the status of ISOs.
 * rhe - Key for test results - makes it easy for more people to provide test results for the same test case.
 * rhe - F-13-Beta test runs - test cases that needed focus were highlighted in the test plan using bold text.
 * rhe - F-13-Beta test runs - Use of #REDIRECT pages (e.g. Test_Results:Current_Installation_Test and Test_Results:Current_Desktop_Test) and Category links instead of real ones (e.g. Beta TC results and Beta RC results).
 * jlaska - Nightly live images - Having them is extremely helpful! QA relies on them during test days, for bug verification and for milestone testing when official media is unavailable.
 * jlaska - Scheduled install acceptance test runs - The F-13 schedule included several install acceptance test runs prior to each milestone's test compose. Before the Branched install images were available (pre-freeze), these were very helpful for identifying Alpha install blockers earlier than the 'test compose'.  I've added this to the wishlist section as well, since there is still room to improve this process.  See results Pre-Alpha#1, Pre-Alpha#2, Pre-Alpha#3, Pre-Beta#1, Pre-Final#1
 * John Poelstra -- Heroic resolution of 67 blocker bugs between 2010-04-30 and 2010-05-06. I believe part of this success was due to the constant messaging and updates as to where we were at and the contingency plan that would have to be enacted if we were not successful.
 * jlaska - Community participation - I don't have numbers in front of me, but participation in the installation test events for F13 was way up, especially during the Final TC and RC phase.
 * rhe - Announcements in time - Since builds were posted at unpredictable times, it's better to have more than one person covering 24 hours a day so that announcements can be sent out promptly.
 * jlaska - Test planning - I really liked having multiple test matrices to track against (both desktop and installation). Also, using milestone specific categories (like Category:Fedora_13_Final_RC_Test_Results and Category:Fedora_13_Final_TC_Test_Results) really helped organize the content, and having direct links to the current test matrix was awesome (as stated above).
 * dramsey - The Features Test Cases and Test Results Matrices - Use of effective pages (e.g. from Test_Day:2010-02-04_NFS to Test_Day:2010-04-01_ABRT). I went through eight of your fifteen test days.  Lots of fun.
 * Looking back, I see there were about five open test day slots. Probably a feature was not ready for review, but consider that an open slot is a lost opportunity, as well as lost momentum for keeping the wheel of progress moving forward.
 * Respectfully, keeping the test days enjoyable, achievable and easy to follow was key for me.
 * Another consideration for the future would be to touch base with the people who would be doing the testing and integrate their ideas for your competitive advantage, for example more i18n and dual-boot considerations, as well as building on previous test cases as a sort of encompassing approach.  There may be merit in those ideas indeed.
 * rhe - Trac Ticket for each test day - Awesome idea to have a trac ticket for each test day and show it on the schedule.

Could have been better

 * jlaska - freeze date between TC and RC - Because the alpha development freeze takes place after the test compose, the first alpha release candidate had a lot of change and was dead on arrival (see thread discussing changes).
 * jlaska - live images - Not having daily Live images available for test left several Live installer bugs lurking until late.
 * jlaska - firstboot modules - We don't have a way to know which firstboot modules are expected. We need to document this as release criteria, or write a test to report the issue (for details, see ).
 * jlaska - test days - Printing test day did not have as many participants as we anticipated and hoped. How come?
 * rhe - schedule - Some candidates were not available on time as expected. Trac tickets were created by releng before each event, but it is not easy to tell the status of candidates from the tickets.  Is there anything else we can do to make sure candidates are uploaded as scheduled next time?
 * Kparal - test days - ABRT test day did not have large attendance. Maybe we shouldn't organize test days around Easter holidays?
 * jlaska - Beta RC3 testing - Beta RC3 didn't include the correct version of . Thankfully, the test matrix caught the problem (see ), but what if we hadn't run that test?  That's an easy test to skip.  This implies that any change to plymouth may impact encrypted partition passphrase entry.
 * pfrields - Schedule and QA - One-week slip for Alpha was supposed to echo down through the schedule, as documented on the Fedora 13 Alpha Release Criteria page. At least one QA team member indicated the shorter time frame contributed to inability to find and resolve Beta blockers.
 * Kparal - network disabled by default - As per bug 572489 the network interfaces will be disabled by default in F13 in most cases (installations from CD/DVD/USB). That's a very unfortunate default setting, especially for Fedora newcomers. We should update our Fedora Release Criteria to ensure we can mark this problem as release blocker (probably Alpha) next time. Automatically working internet connection is so basic assumption for most users that we shouldn't ship Fedora doing the opposite.
 * rhe - test plan - Though some test cases are marked as Beta priority, they still didn't block beta release, such as repository cases. (adam nb: this is because the bugs exposed caused the test to 'fail', but don't really break the underlying release criteria. I think we could perhaps track this type of 'failure' specifically in the results tables).
 * rhe - Virt test day - Too many test cases and features for one event, perhaps would be better as a test week or a test day with smaller focus?
 * dramsey - As noted above, the Virt test day was not to my advantage. In fact, I was thinking there was no way I would be able to accomplish all of this during my day off.  It turned me "off" like a lightbulb, even though I love virtualization.  Isn't it ironic?
 * I would ask that one of the following be considered:
    * a test week, in order to accomplish all the test cases,
    * scaling the content to fit the single test day structure, or
    * cutting it into individual bite-size chunks.
 * liam - Test Day - Some test cases need to be improved; none of the cases on https://fedoraproject.org/wiki/Test_Day:2010-04-08_Virtualization_VirtioSerial were executed on the test day. I have filed a ticket to track this (https://fedorahosted.org/fedora-qa/ticket/61), but did not find people to do it. The test cases do not seem to have been written by Amit.
 * liam - Install testing - If we could use USB flash drives instead of burning CD/DVD/Live CD media, we would involve more people in install testing. We should ask release engineering to build media that supports USB boot and install. During each round of testing we have to burn 2 DVDs (i386 and x86_64), 4 CDs (cd1/cd2) and 2 Live CDs. If we could use USB for install testing, booting from CD-ROM could be covered by a virtual machine, and we could stop burning discs.
 * jlaska - milestone tracking - Several people in QA act on the rel-eng deliverables associated with test milestones (Alpha, Beta, RC). The tickets to track these events were often created on the day of the event, or after.  Having these tickets created for all milestones at the start of development would be helpful.
 * adamwill - Blocker request? - During Fedora 13, there was no clear way to note that a blocker request had been reviewed and approved. Several installer bugs slipped through the cracks while waiting for confirmation that they had been approved as blockers (e.g. ).  Currently, we only "approve" blockers by way of an updated comment after blocker review.
 * jkeating - Bodhi autocloses bugs - When bodhi manages the bugs, it closes them as soon as the update hits stable, regardless of whether they fix the issue. We'll likely need some separate way to make sure we get verification that the bugs are indeed fixed by the update.
 * rhe - Test day - Features which require special or high-end devices, like RAID, iSCSI or multipath storage, are not well suited to a test day, since most community members don't have such devices.
 * John Poelstra -- Hound blocker bug owners earlier and more aggressively so that the end does not have to be such a mad dash.
 * AdamW -- we didn't seem to have any kind of formal freeze close to release time, even for critpath. We may have been rejecting non-blocker critpath updates, but I don't remember such a freeze being put on the schedule or announced. I think it's fine to leave non-critpath unfrozen, but we should have a formal freeze for critpath around the time the TC is done.
 * John Poelstra -- some of this was due to the removal of the Alpha and Beta Freeze milestones which were deemed to no longer apply because of No Frozen Rawhide. I would advocate adding them back.
 * jlaska - Install test plan TUI/GUI - The test plan details some tests that must be performed in text mode and some in graphical mode, but there are cases that need to be tested in both. We can either accept that these gaps exist, or add many more test cases to explicitly call out the cases that need both GUI and TUI verification (see ).
 * AdamW - better messaging around deadlines for community testers; we got a lot of testing on RC2 that's really hard to use, since we have to slip for _any_ fix. We should have communicated better that this information really needs to come from TC1/RC1 testing.
 * John Poelstra - I would like to work with QA to add as much additional detail as they would like to the weekly schedule emails. Might also be good to include the specific test days as well (jlaska suggested this to me before, but I didn't think it was that valuable at the time--now I see he was right).
 * jlaska - Dual-boot Install Expectations - Due to , the dual-boot experience was not well understood or tested for the final release. The problem was discovered late, and it was unclear whether this behavior was critical to Fedora's success.
 * jlaska - Install test plan - Our current test matrix verifies that boot.iso, CD and DVD booting work, and that CD, DVD, HTTP, NFS and HDISO installation sources work. The test plan, however, doesn't specify all permutations of boot method (CD, DVD, boot.iso, pxeboot) + package repository (CD, DVD, URL, NFS).   was found by testing  and .  That test isn't explicitly called out in our test matrix.  The current NFS test isn't specific about how you boot the installer; it assumes PXE boot.  Perhaps we should fill this gap in the test matrix, or stop supporting all these installation methods.
 * rhe - Desktop Matrix - XFCE and LXDE results are often left blank. Maybe we need to encourage testing of these? Or combine them together? Or just take them out of the matrix?
 * rhe - Local language install - Currently we don't have i18n installation test cases, so local-language installs are not tested systematically. Untranslated screens still existed after RC testing. Such cases need to be added to the test plan as well as the release criteria.
 * rhe - Install Test Cases - Some cases are not essential, and the steps of some other cases are no longer appropriate. A review of all test cases is needed.
 * rhe - Install Test Cases - Quite a number of testers would install without using disc media. Should we consider adding cases such as Install Source USB Drive or Grub install?
 * jlaska -  - Conditions leading to  not being fixed in Fedora 13
 * Cause: Nice-to-have fixes (aka F13Target) depend on the maintainer to resolve
 * History:
 * Bug filed and found during F12 or F13 testing
 * Beta blocker status requested -- keyword F13Beta added
 * Reviewed and considered a nice to have fix -- Changed keyword to F13Target
 * Bug fixed in F14 and status changed to MODIFIED
 * Recommendation(s): Unclear
 * jlaska -  - Conditions leading to not being fixed in Fedora 13 --
 * Cause: not including MODIFIED Blocker bugs in QA unresolved blocker emails
 * History:
 * Bug found and filed during F13-Alpha testing
 * Final blocker status requested -- keyword F13Blocker added
 * Fixed in F14 and status changed to MODIFIED
 * Found again during F-13-Final-TC1 testing and requested to cherry-pick F14 fix into F13.
 * Recommendation(s): See
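Liam's USB suggestion above could look something like the following minimal sketch. The ISO filename and device node are placeholders I've made up for illustration, and the command is only printed rather than executed here, since dd overwrites the target device (always verify the device first, e.g. with lsblk).

```shell
# Hypothetical sketch of preparing a USB install stick instead of burning
# discs. ISO and DEV are placeholders, not real paths.
ISO="Fedora-13-x86_64-DVD.iso"
DEV="/dev/sdX"   # replace with the real USB device node -- dd destroys it!

# Build the write command; printed here instead of run, as a safety measure.
CMD="dd if=$ISO of=$DEV bs=4M conv=fsync"
echo "$CMD"
```

For Live images of that era, the livecd-iso-to-disk tool from the livecd-tools package was the usual alternative, though its exact options should be checked against its own documentation.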

Wishlist

 * Kparal - care about low-bandwidth testers - we should improve the accessibility of QA processes for low-bandwidth users. That means primarily using existing tools to lower the download requirements of individual release milestones: deltaiso, zsync (2).
 * Kparal - test days calendar - we could create and maintain a web calendar containing all Test Days (maybe even other QA activities?) that people could add to their calendar program, so they are notified when a new event is happening (not everybody follows the announcements on MLs, and events are announced only a few days in advance, so people may easily forget -- my case). Just another way to achieve a little higher participation.
 * rhe - I remember this idea was also proposed on F12 retrospective. A big schedule with all test events is easy for testers to prepare and participate.
 * Pfrields - Reward testers -- we should reward repeat (or frequent) community QA/testers with a 4 or 8 GB USB key and maybe another gift. I have a small pot of money we can use for this, if Fedora QA team folks are not able to just do things like this.  (I would advocate that Jlaska have a small pot of money for this, amount TBD.)
 * rhe - Agree with the notes above. Gifts could also be T-shirts, cups, etc. with Fedora logos.
 * Mcepl - boot.fedoraproject.org -- it could be helpful for everybody present to deploy test-day images to http://boot.fedoraproject.org; not sure about bandwidth requirements for that, but it shouldn't be worse than everybody downloading whole images.
 * rhe - community involvement - Would love to have someone outside the core team help announce or host a planned test run (Desktop, Installation or other).
 * rhe - This has now come true with the help of Andre, who led the final validation test events. Hope to keep this going.
 * rhe - care about low-bandwidth testers - Delta ISOs made officially available.
 * robatino - there's a ticket for this, for development releases. I'd also like to see it happen for mass releases (Final->Alpha at Alpha release, Final->Beta and Alpha->Beta at Beta release, Final->Final and Beta->Final at Final release).
 * Pfrields - test day incentive - One idea, if you have "repeat testers" we might want to reward them with a 4-8 GB USB key and maybe another gift?
 * Kparal - easily available links to current QA activities - I would like to see a floating frame on some of our important wiki pages (like QA) containing a list of all current activities. The list could contain links to the current installation test, the current (or very near) test day, today's meeting, etc. -- all the places where people can get involved. The list would probably have to be updated manually, but I think it would be worth it. I got this idea when I was in a LiveCD environment: I opened the QA wiki page, and it was nearly impossible to find a link to the current installation testing. If it was impossible for me, what about the public?
 * Kparal - participate in Summer Coding - I have just found out about event Summer Coding 2010. It would be great if we could create a few ideas for summer student projects (for example related to AutoQA, but maybe also other activities) and let the students work with us to finish the task. We gain more manpower and the possibility that the student will stay in QA even after the task is done.
 * Pcfe - test day timing - Give ~one week advance notice for test days
 * Pcfe - test day reasoning - Record reasons for reboots between tests. A lot of the Xorg-x11-drv tests require reboots, and in many cases using a liveCD can lead to a really long boot time.  Perhaps each case that requires a reboot could more clearly explain why the reboot is needed (fresh module set, enabling or disabling KMS, etc.).
 * Pcfe - test day page groupings - Pcfe didn't like having to click through multiple pages; it would be nice to have a single printable page with all test case instructions. Stickster points out this can be accomplished by making a page that transcludes all the individual test instructions for a day in one place.
 * Pcfe - make feedback easier - currently we track test results in a wiki table; while it looks nice, it is annoying and error-prone to edit.
 * jlaska - installation testing - I'd like to propose removing all the RAID tests and replacing them with 1 or 2 general RAID tests. We aren't hitting problems anymore that are specific to RAID0 but don't affect RAID5.  The main focus of these tests is to make sure that we identify a real-world RAID install partitioning setup, that anaconda can execute that partition scheme, and that dracut can boot it.
 * jlaska - i18n installation testing - What does the Fedora i18n team test, and what don't they test? Can we better coordinate with them?
 * jlaska - Test Day Help - identify and encourage a group of participants to act as test day specialists (perhaps with office hours). They can sign up and be available during times at different test events.  They understand the basics of Fedora and know where to go to find documentation.  They would be the first level of triage for test day problems.
 * jlaska - Blocker Reviews - These are too time-consuming for the team. Can we somehow improve this process to make it scale better?  Perhaps improved guidelines for having the bug assignee and reporter negotiate blocker escalation; the QA+RelEng team would only review issues where things are unclear.  Spending 4 hours reviewing blocker bugs on IRC doesn't seem like the best use of time, does it?
 * jlaska - Last known good - Prior to the Alpha and Branched compose availability, we built custom composes and ran them through the QA:Rawhide_Acceptance_Test_Plan to determine whether a compose was good enough for general use. We maintained a symlink pointing to the last known good install source.  Once the Branched install images were available, this process no longer made sense.  We need to figure out how to make last known good meaningful for the installer.  This includes figuring out how new anaconda packages get built and submitted to bodhi, when the compose happens, and where it pulls content from (updates or updates-testing?).  Do the automated install tests run and provide positive bodhi karma for anaconda?
 * robatino - Installation test improvement - Some install tests are subsets of others - for example, if I do a default graphical install from the DVD, then I can do BootMethodsDvd, PackageSetsDefaultPackageInstall, Anaconda autopart install, and Anaconda User Interface Graphical simultaneously. If using a virtual guest, I can also throw in Anaconda partitioning uninitialized disks.  Some way of grouping results to demonstrate this relationship could help testers.
 * Some new tests could be thrown in that could be subsets of existing tests so they wouldn't make the testing take any longer. For example, when entering either the root or firstboot password, a weak password could be tried first, to make sure the warning is generated, then a strong one.  The time-consuming part of these tests is waiting for a large number of packages to be installed.
 * Testcase Mediakit Repoclosure and Testcase Mediakit FileConflicts should be done together since the mount part of the instructions is exactly the same.
 * Some tests such as Testcase Anaconda rescue mode and the save traceback tests are very quick since they don't involve actually installing. The rescue mode test requires an existing install, the others don't.
 * wwoods - Better bug reporting - We need to better advertise how to self-diagnose problems (similar to Category:Debugging) and, instead of filing a bug first, discuss the problem on the mailing list. Do we have the right information listed to guide problem analysis?  The first line of defense should be the test@ mailing list, to collaboratively solve problems.
 * jlaska - Pony - some web tool to help collaborative problem debugging. Similar to http://answers.yahoo.com/ perhaps?
 * jlaska - Internationalization - We keep having i18n issues (See ). It's not clear how a language and keymap setting are supposed to propagate throughout the OS after install.  For example, should that value be used by dracut for passphrase entry?  GDM for login and password entry?  X etc...?  It would be great for folks to sit down and clear up expectations.
 * dramsey - Internationalization and virtual machines - More support for i18n.  For virtual machines, as an idea: as the tester tests, when an error is captured, another person could view the content via their own browser.  Sort of kills two birds with one stone -- the system, that is.  :)  Depends on whether you are interested in receiving feedback relevant to an overall system.
 * Second thought: consider a test day / test week midway through the schedule to do an encompassing system test pass via ISOs -- a "see if what was fixed" check at the halfway point.  For example, if bug #12345 broke NFS in week 1 and the fix landed in week 7, would it be useful to "redo" NFS and other modules which were fixed halfway through the schedule?  Food for thought.  :)
 * adamwill - a validation test for booting the live image with 'xdriver=vesa' to get the VESA driver; I realized after release that we don't test this and aren't even sure if it's still present / meant to be present
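The "last known good" pointer described in the wishlist above can be as simple as a symlink swap. The following is an illustrative sketch with made-up compose directory names, not the real compose tree layout:

```shell
# Two fake compose directories (names are illustrative only).
mkdir -p composes/compose-20100501 composes/compose-20100502

# After a compose passes the acceptance test plan, repoint the symlink.
# -f replaces an existing link; -n keeps ln from following it as a directory.
ln -sfn compose-20100502 composes/last-known-good

readlink composes/last-known-good   # -> compose-20100502
```

Because the swap touches only one symlink, consumers of "last-known-good" never see a half-updated state.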

Recommendations
After enough time has been given for feedback, the QA team will discuss and make recommendations on changes to prioritize for Fedora 14. This section organizes and lists the recommendations.

In order to coordinate efforts, and measure effectiveness of recommendations, please record and track any action taken in the Fedora 14 roadmap in the QA TRAC instance.

Release Criteria

 * 1) Create Fedora 14 Criteria Pages -
 * Create new wiki pages for Fedora_14_Final_Release_Criteria, Fedora_14_Beta_Release_Criteria, and Fedora_14_Alpha_Release_Criteria
 * 1) Add firstboot release criteria -
 * Release Criteria - The release criteria do not specify which firstboot modules are intended for Fedora.  Often, some firstboot modules are missing or disabled, and we don't notice or know whether this is a blocker.  Recommend clarifying the use cases of firstboot in the release criteria.
 * 1) Add dual-boot criteria  -
 * The user experience of dual-boot scenarios was not well understood and, as a result, was not clearly reflected in the release criteria.  Recommend reviewing and adjusting the release criteria for dual-boot expectations.
 * 1) Add i18n criteria -
 * i18n from installer to login - Recommend reviewing the release criteria to ensure expectations are captured around propagating language and keymap settings from install to desktop login (, etc...).

Release Validation

 * 1) Update existing dual-boot tests and add to test plan -
 * Recommend updating existing dual-boot test cases to reflect criteria, and adding to the install matrix (depends on )
 * 1) Coordination with i18n team on installer test coverage -
 * Lang/Keymap Selection -  pointed out the need for a better understanding of how the language and keymap installer selections impact the installed system.  Recommend reviewing methods for incorporating i18n verification into existing test plans, or coordinating with the i18n team.
 * 1) Define install test run baseline -
 * F-13 Beta candidate#3 didn't include the correct version of (see ).  Since it was Beta#3 and not much was supposed to change from Beta#2, some test results from the previous Beta#2 candidate were carried forward.  Thankfully, QA found the problem before release while running the QA:Testcase_Anaconda_autopart_(encrypted)_install test.
 * Establish better guidelines about which test results can be carried forward from one candidate to the next (this isn't easy). Or perhaps, establish a subset of tests that all respins must undergo.
 * 1) Clarify use of Test Priority in test matrix -
 * There was some confusion around the priority listed in the install and desktop test matrices. Recommend clarifying that the priority listed is the test execution priority.  It does not always mean that a failure identified by those tests will block the applicable release.
 * 1) Recommend adding text-mode upgrade test to install matrix -
 * Some tests in the install test matrix involve user interface components in both text mode and graphical mode (native or VNC). One such bug  involved a problem found only during text-mode upgrades to Fedora 13.  I'm hesitant to recommend specific text-mode tests for each user interface element that differs between text mode and graphical mode (that would be exhaustive but unachievable).  Testing a text-mode install is included in the current matrix.  Is there something small that can be done to cover the gap?  Recommend adding a text-mode upgrade test case to the install matrix.
 * 1) Organize install tests by install use cases/scenarios -
 * The current install test matrix, by design, tests many different choices the user can make during an install. In most cases, the tests are not specific about the order of choices.  There were several bugs in Fedora 13 that involved a specific ordering of choices (see ).  Recommend investigating and reorganizing the install test matrix to better map to install use cases.  The tests that are important for a particular use case need to be tested against that use case.
 * 1) Improve test engagement with other desktop communities -
 * Results were often not available for non-GNOME desktops. Recommend looking for opportunities to improve communication between teams and increasing non-GNOME desktop testing leading up to milestones.
 * 1) Improve test instructions -
 * Some install tests are showing their age and do not provide clear and consistent test instructions.
 * 1) Remove duplicate tests -
 * Review and remove tests which duplicate existing tests in the matrix (e.g. separate RAID0, RAID1, RAID5, RAID6 and RAID10 tests are not all needed).
 * 1) Test basic video driver install -
 * There are no tests to confirm expected behavior when booting the live image or installer using the second boot option Install system with basic video driver.
 * 1) Recommend updating QA:Testcase_Anaconda_User_Interface_Graphical, or creating a new test case, that confirms expected behavior when booting Install system with basic video driver.  This test will need to be run against traditional media (CD, DVD and boot.iso) and Live media.
 * 2) Recommend adding a Live image boot option for Boot with basic video driver
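For reference, the "basic video driver" behavior recommended above amounts to extra kernel command-line options in the boot menu. A sketch of what such a syslinux/isolinux boot entry might look like is below; the label name is made up, and while 'xdriver=vesa' is the option named elsewhere on this page, the exact set of options should be verified against the installer's actual boot configuration:

```
label basic-video
  menu label Install system with basic video driver
  kernel vmlinuz
  append initrd=initrd.img xdriver=vesa nomodeset
```

Adding an equivalent entry to the Live image's boot menu is what recommendation 2) above would entail.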

Test Days

 * 1) Limit the scope of test days  -
 * If another Virtualization test day is intended, recommend focusing on 2 (or fewer) features, or planning for a Virt Test Week.
 * 1) Test week  -
 * If a large amount of testing is needed, too large for just a single day, recommend splitting the event across multiple days. Much like the Xorg-x11-drv graphics events.
 * The Xorg test week was well organized and communicated, perhaps repeating this for Fedora 14 would be ideal.
 * 1) Avoid schedule conflicts and public holidays  -
 * ABRT did not have strong participation, likely due to scheduling the event over the Easter holiday. QA should be mindful of public holidays that might lead to lower attendance for any scheduled test days.
 * Update QA/SOP_Test_Day_management with a caution about public holidays.
 * 1) Choosing good test day topics  -
 * Test Days focused on limited-availability hardware (iBFT, iSCSI, RAID, multipath) are not well attended by the general test community. It is unclear whether strong user communities (possibly outside Fedora) exist around these features.
 * Recommend updating QA/SOP_Test_Day_management, or creating a new wiki page, to provide some guidance on choosing a good test day topic.

Blocker Review

 * 1) Monitor all MODIFIED Blocker bugs   -
 * During Fedora 13, QA did not track MODIFIED bugs during blocker review meetings. In the lead-up to the Final release, several MODIFIED blocker bugs were not fixed.  To honor the existing release criteria (all bugs blocking the F13Blocker tracker must be CLOSED), QA needs to also keep track of MODIFIED blocker bugs.
 * 1) QA needs to keep track of MODIFIED Blocker bugs and ensure they are tested and moved to VERIFIED.
 * 2) Improve tracking blocker review status  -
 * In F-13, a bugzilla keyword was used to denote blocker status. However, aside from reviewing bugzilla comments, there was no query-able method to determine whether a blocker request was open, approved or denied.  This led to several Fedora 13 bugs that were 1) fixed in Rawhide, 2) moved to MODIFIED or CLOSED, but not included in Fedora 13 (e.g.  and ).
 * Not knowing which bugs had already been reviewed also meant time was wasted re-reviewing bugs during blocker review meetings. Recommend reviewing process changes to avoid the scenarios leading up to  and .  Some options discussed so far include 1) using bugzilla flags (suggested by jkeating) to track blocker requests or 2) hardening the current keyword-based mechanism.
 * 1) Generate exception report and generate action plan for CLOSED Blocker bugs that did not go through VERIFIED.
 * 2) More frequent nag mails leading up to release milestones.  Notification of NEW or ASSIGNED bugs goes to the maintainer, and notification of MODIFIED bugs goes to QA.
 * 3) Blocker nag mails  -
 * During the lead up to F-13-RC, numerous reminder emails were sent to test@ and devel@ mailing lists noting the number and state of remaining blocker bugs. It is felt that, along with developer time of course, the attention to the blocker list contributed to the resolution of 67 blocker bugs from 2010-04-30 to 2010-05-06.
 * 1) Recommend more frequent blocker status emails leading up to each major milestone.  Unclear who should be responsible for sending these mails, QA, rel-eng or program mgmt?
 * 2) Update QA:SOP_Blocker_Bug_Meeting with blocker meeting queries or commands and where to send the mails (developers bcc'd?)
 * 3) Stretch goal - add email announcements to the Fedora 14 schedule.

Process

 * 1) Propagating schedule slips
 * When the F-13-Alpha release slipped by 1 week, despite previous discussion around the expected course of action, QA agreed not to propagate the slip to the rest of the schedule. This was a mistake that contributed to missing early F-13-Beta milestones and eventually slipping the F-13-Beta release by 1 week.
 * Recommend that any schedule slips carry forward into the schedule.
 * 1) Update BugStatusWorkFlow to include VERIFIED state -
 * Now that bodhi is responsible for closing bugs, QA should use the VERIFIED state to note when a bug has been tested and confirmed fixed in an updated package available through bodhi.
 * Recommend use of the VERIFIED state to note when a bug has been tested and confirmed fixed. This includes updating BugZappers/BugStatusWorkFlow to reflect the use of the VERIFIED bugzilla state.
 * 1) Define and establish proventesters process  -
 * With Critical Path Packages defined and requiring bodhi karma to enter the release, we need to define and build community participation in the process of accepting updates into Fedora.
 * Recommend defining the process for joining, outlining member responsibilities and documenting test instructions or guidelines. We need to invite more participants to help qualify Critical Path Package updates.
 * 1) Reward key testers -
 * During Fedora 13, pfrields was able to secure funding to reward several key QA contributors.
 * 1) Recommend requesting and securing QA budget to reward key contributors
 * 2) Recommend researching reward options (maxamillion has discussed t-shirt ideas with the design team)

Test Automation

 * 1) Automate package update acceptance - See package update, resultsdb, virtualization and depcheck milestones.
 * Complete tasks needed to automate the QA:Package_Update_Acceptance_Test_Plan.
 * See TRAC milestones depcheck, package update tests, package sanity and resultsdb.
 * 1) Automate basic installation tests - See autoqa install roadmap
 * Continuing install automation work led by Liam
 * Goal to have mediakit tests scripted and a subset of automated installs. See milestone.
 * 1) Integrate automated storage testing -
 * Incorporate the automated storage test tool developed by clumens into AutoQA.

Communication

 * 1) Increase casual tester engagement -
 * Look for opportunities to better engage casual testers in test runs (desktop and install). We have a lot of testers reporting feedback on the mailing list; is there a way to better incorporate that feedback into wiki test runs/results?
 * 1) Earlier test day announcements -
 * Recommend that Test Days be announced (planet, test-announce, forums, etc.) earlier. While it's difficult to announce a test day before the test day wiki content is available, we need to announce the events much earlier.
 * 1) How to debug -
 * We have a lot of smart people that find and debug problems on a regular basis. I recommend improving the Category:Debugging content by creating a SOP for debugging a problem (recommended by User:wwoods).  This SOP would guide readers through the common steps of isolating a problem.

Infrastructure

 * 1) Always available live images -
 * In Fedora 13 (and 12), we found live-image-related issues too late. One contributing factor was that, due to packaging bugs, live images were not available for testing for long periods of time.  Having live images available for testing as much as possible is important.  Kevin currently maintains this process.  Recommend working with Kevin to look for ways to ensure that there is always a live image available for testing (aka last known build).

Release Engineering

 * 1) Delta-ISOs -
 * QA contributor Andre Robatino builds and provides Delta ISO images for all Fedora test milestones, and maintains the process for creating them. These images are used by several key QA contributors who have low bandwidth, including rhe, liam, kparal and robatino.
 * Recommend moving delta-ISO generation into the compose process, or giving Andre access to internal systems to create delta-ISO images.
 * 1) Creating ISO TRAC tickets earlier -
 * Using TRAC milestones was extremely helpful to keep on top of ISO availability for each milestone. Unfortunately for F-13-Beta and F-13-RC, the tickets were not created until on or after the date of the deliverable.
 * Recommend creating the TRAC tickets for a milestone much earlier in the release.
 * 1) Keeping ISO TRAC tickets up-to-date -
 * Tickets were not always kept up to date with current status.
 * Is there anything QA can do to improve communication here?
 * 1) Research new rel-eng test -
 * Investigate a test opportunity with release engineering to validate that the content included on a mediakit (DVD, CD, live image) is as expected (see )