
Introduction

This page is intended to gather feedback from the Fedora QA community on things that worked well and things that could have been better with the testing of Fedora 13. The feedback will be used as a basis for identifying areas for improvement for Fedora 14 testing. Any thoughts, big or small, are valuable. If someone already provided feedback similar to what you'd like to add, don't worry ... add your thoughts regardless.

For any questions or concerns, send mail to test@lists.fedoraproject.org.

Providing feedback

Adding feedback is fairly straightforward. If you already have a Fedora account ...

  1. Log in to the wiki
  2. Select [Edit] for the appropriate section below.
  3. Add your feedback using the format:
    * ~~~ - I like ____ about the new ____ process
  4. When done, Submit your changes

Otherwise, if you do not have a Fedora account, follow the instructions below ...

  1. Select the appropriate page for your feedback...
  2. Add your feedback using the format:
    * ~~~ - I like ____ about the new ____ process
  3. When done, Submit your changes

Feedback

Things that went well

  • 192.168.1.63 - I like the higher default 1024x768 screen resolution provided for generic display upon installation.
  • jlaska - Release Criteria - Having release criteria that have been reviewed and accepted by key stakeholders really expedites testing by reducing bug escalation time and uncertainty regarding user impact. Instead of spending precious time debating the merits of each bug report, we can collectively debate the criteria ... and adjust as needed.
  • rhe - Trac tickets for creating milestones - these efficiently track the status of the ISOs.
  • rhe - test results key - makes it easy for more people to provide test results for a single test case.
  • rhe - F-13-Beta test runs - test cases that needed focus were highlighted in the test plan using bold text.
  • rhe - F-13-Beta test runs - Use of #REDIRECT pages (e.g. Test_Results:Current_Installation_Test and Test_Results:Current_Desktop_Test) and Category links instead of real ones (e.g. Beta TC results and Beta RC results).
  • jlaska - Nightly live images - Having them is extremely helpful! QA relies on them during test days, for bug verification and for milestone testing when official media is unavailable.
  • jlaska - Scheduled install acceptance test runs - The F-13 schedule included several install acceptance test runs prior to each milestone's test compose. Before the Branched install images were available (pre-freeze), these were very helpful for identifying Alpha install blockers earlier than the 'test compose'. I've added this to the wishlist section as well, since there is still room to improve this process. See results Pre-Alpha#1, Pre-Alpha#2, Pre-Alpha#3, Pre-Beta#1, Pre-Final#1
  • John Poelstra -- Heroic resolution of 67 blocker bugs between 2010-04-30 and 2010-05-06. I believe part of this success was due to the constant messaging and updates as to where we were at and the contingency plan that would have to be enacted if we were not successful.
  • jlaska - Community participation - I don't have numbers in front of me, but participation in the F13 installation test events was way up, especially during the Final TC and RC phase.
  • rhe - Announcements in time - Since builds were posted at unpredictable times, it would be better to have more than one person covering a 24-hour day so that announcements can be sent out promptly.
  • jlaska - Test planning - I really liked having multiple test matrices to track against (both desktop and installation). Also, using milestone specific categories (like Category:Fedora_13_Final_RC_Test_Results and Category:Fedora_13_Final_TC_Test_Results) really helped organize the content, and having direct links to the current test matrix was awesome (as stated above).
  • dramsey - The Features Test Cases and Test Results Matrices - Use of effective pages (e.g. from Test_Day:2010-02-04_NFS to Test_Day:2010-04-01_ABRT). I went through eight of your fifteen test days. Lots of fun. Looking back, I see there were about five open slots. Probably a feature was not ready for review, but an open slot is a lost opportunity as well as lost momentum for keeping the wheel of progress moving forward. Respectfully, keeping the Test Days enjoyable, achievable, and easy to follow was key for me. Another consideration for the future would be to touch base with the people who will be doing the testing and integrate their ideas, for example more i18n and dual-boot coverage, as well as building upon previous test cases as a sort of encompassing approach. There may be merit in those ideas indeed.

Could have been better

  • Parijath - DNS not resolving - In my default installation, Firefox cannot resolve URLs. When I enter an IP address directly, it works; otherwise it doesn't. 'ping -c 3 google.com' works as well. I thought there must be some problem with SELinux or the firewall. As I am a newcomer, I disabled iptables and ip6tables and also SELinux, then rebooted. Even then I couldn't connect to any URL from Firefox. I tried enabling all the services related to DNS and networking. What should I do? I opened a terminal, ran 'su', then typed 'firefox' (as root), and got the error message below ... Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash. See http://projects.gnome.org/gconf/ for information. (Details - 1: Failed to get connection to session: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.)
  • Parijath - Package update problems - Next problem: the system showed about 12 updates. When I selected update, it never finished.
    • jlaska 17:36, 27 April 2010 (UTC) - Greetings Parijath. Sorry to hear of your troubles. Your best bet is to join the Fedora QA test mailing list and start communicating each issue you are experiencing. Try to keep your emails specific to one topic per mail, and just as you have done here, include any diagnostics you have performed to solve the problem.
  • jlaska - freeze date between TC and RC - Because the alpha development freeze takes place after the test compose, the first alpha release candidate had a lot of change and was dead on arrival (see thread discussing changes).
  • jlaska - live images - Not having daily Live images available for test left several Live installer bugs lurking until late.
  • jlaska - firstboot modules - We don't have a way to know which firstboot modules are expected; we need to document this in the release criteria, or write a test that reports the issue (for details, see RHBZ #574596). A rough sketch of such a check appears after this list.
  • jlaska - test days - Printing test day did not have as many participants as we anticipated and hoped. How come?
  • rhe - schedule - Some candidate builds were not available on time as expected. Trac tickets were created by releng before each event, but it is not easy to tell the status of the candidates from the tickets. Is there anything else we can do to make sure they are uploaded on schedule next time?
  • Kparal - test days - ABRT test day did not have large attendance. Maybe we shouldn't organize test days around Easter holidays?
  • jlaska - Beta RC3 testing - Beta RC3 didn't include the correct version of plymouth. Thankfully, the test matrix caught the problem (see RHBZ #578633), but what if we hadn't run that test? It's an easy test to skip. This implies that any change to plymouth may impact encrypted partition passphrase entry.
  • pfrields - Schedule and QA - One-week slip for Alpha was supposed to echo down through the schedule, as documented on the Fedora 13 Alpha Release Criteria page. At least one QA team member indicated the shorter time frame contributed to inability to find and resolve Beta blockers.
  • Kparal - network disabled by default - As per bug 572489, the network interfaces will be disabled by default in F13 in most cases (installations from CD/DVD/USB). That's a very unfortunate default setting, especially for Fedora newcomers. We should update our Fedora Release Criteria to ensure we can mark this problem as a release blocker (probably Alpha) next time. An automatically working internet connection is such a basic assumption for most users that we shouldn't ship Fedora doing the opposite.
  • rhe - test plan - Though some test cases, such as the repository cases, are marked as Beta priority, failing them still didn't block the beta release. (adam nb: this is because the bugs exposed caused the test to 'fail', but don't really break the underlying release criteria. I think we could perhaps track this type of 'failure' specifically in the results tables.)
  • rhe - Virt test day - Too many test cases and features for one event, perhaps would be better as a test week or a test day with smaller focus?
  • dramsey - As noted above, the Virt test day was not to my advantage. In fact, I felt there was no way I would be able to accomplish all of it during my day off. It turned me "off" like a lightbulb, even though I love virtualization. Isn't it ironic? I would ask that a test week be considered in order to accomplish all the test cases, or that the scope be scaled to fit the single test day structure.
  • liam - Test Day - Some test cases need to be improved; none of these cases were executed on the Test Day: https://fedoraproject.org/wiki/Test_Day:2010-04-08_Virtualization_VirtioSerial. I have filed a ticket to track it (https://fedorahosted.org/fedora-qa/ticket/61), but did not find people to do this. The test cases do not seem to have been written by Amit.
  • liam - Install testing - If we could use USB flash drives instead of burning CD/DVD/live CD media, we would involve more people in install testing. We should ask release engineering to build media that support USB boot and install. During each round of testing we have to burn 2 DVDs (i386 and x86_64), 4 CDs (cd1/cd2), and 2 live CDs. If we could use USB for install testing, the boot-from-CDROM testing could be done in a virtual machine and we could stop burning discs.
  • jlaska - milestone tracking - Several people in QA act on the rel-eng deliverables associated with test milestones (Alpha, Beta, RC). The tickets to track these events were often created on the day of the event, or after. Having these tickets created for all milestones at the start of development would be helpful.
  • adamwill - Blocker request? - During Fedora 13, there was no clear way to note that a blocker request had been reviewed and approved. Several installer bugs slipped through the cracks while waiting for some confirmation that they had been approved as blockers (e.g. RHBZ #577803). Currently, we only "approve" blockers by way of an updated comment after blocker review.
  • jkeating - Bodhi autocloses bugs - When bodhi manages the bugs, it will close them as soon as they hit stable, regardless of whether they fix the issue or not. We'll likely need some separate way to make sure we get verification that the bugs are indeed fixed by the update.
  • rhe - Test day - Features that require special or high-end devices, like RAID, iSCSI, or multipath storage, are not well suited to a test day, since most community members don't have such devices.
  • John Poelstra -- Hound blocker bug owners earlier and more aggressively so that the end does not have to be such a mad dash.
  • AdamW -- we didn't seem to have any kind of formal freeze close to release time, even for critpath. We may have been rejecting non-blocker critpath updates, but I don't remember such being put on the schedule or announced. I think it's fine to leave non-critpath, but we should have a formal freeze for critpath around the time the TC is done, I think.
    • John Poelstra -- some of this was due to the removal of the Alpha and Beta Freeze milestones which were deemed to no longer apply because of No Frozen Rawhide. I would advocate adding them back.
  • jlaska - Install test plan TUI/GUI - The test plan details some tests that must be performed in text mode and some in graphical mode, but there are cases that need to be tested in both. We can either accept that these gaps exist, or add a lot more test cases to explicitly call out the ones that need both GUI and TUI verification (see RHBZ #590823).
  • AdamW - better messaging around deadlines for community testers; we got a lot of testing on RC2 that's really hard to use, since we would have to slip the release for _any_ fix at that point. We should have communicated better that this information really needs to come from TC1/RC1 testing.
    • John Poelstra - I would like to work with QA to add as much additional detail as they would like to the weekly schedule emails. Might also be good to include the specific test days as well (jlaska suggested this to me before, but I didn't think it was that valuable at the time--now I see he was right).
  • jlaska - Dual-boot Install Expectations - Due to RHBZ #590661, the dual-boot experience was not well understood and tested for the final release. It was discovered late, and it was unclear whether this behavior was critical to Fedora's success.
  • jlaska - Install test plan - Our current test matrix verifies that boot.iso, CD and DVD booting work, and that CD, DVD, HTTP, NFS and HDISO installation sources work. The test plan, however, doesn't specify all permutations of boot method (CD, DVD, boot.iso, pxeboot) and package repository (CD, DVD, URL, NFS). RHBZ #590640 was found by testing boot.iso with repo=nfs:server:path; that combination isn't explicitly called out in our test matrix. The current NFS test isn't specific about how you boot the installer; it assumes PXE boot. Perhaps we should fill this gap in the test matrix, or stop supporting all these installation methods (see the permutation sketch after this list).
  • rhe - Desktop Matrix - The XFCE and LXDE columns are often left blank, untested. Maybe we need to encourage testing of these? Or combine them? Or just take them out of the matrix?
  • rhe - Local language install - Currently we don't have i18n installation cases, so local-language installs are not tested as required. Untranslated screens still exist after RC testing. Such cases need to be added to the test plan as well as to the release criteria.
  • rhe - Install Test Cases - Some cases are not essential, and the steps of some other cases are no longer correct. A review of all test cases is needed.
  • rhe - Install Test Cases - Quite a number of testers would rather install without using disc media. Should we consider adding cases such as Install Source USB Drive or Grub install?
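
Sketch for the firstboot-modules item above: a minimal check, in Python, comparing the firstboot modules found on disk against an expected list. The module directory (/usr/share/firstboot/modules) and the EXPECTED set are assumptions for illustration only; the real list would need to come from the release criteria.

  # Minimal sketch: report missing or unexpected firstboot modules.
  # Assumptions: modules are .py files under /usr/share/firstboot/modules
  # (path may differ) and EXPECTED is maintained by QA -- both illustrative.
  import glob
  import os
  import sys

  MODULE_DIR = "/usr/share/firstboot/modules"         # assumed location
  EXPECTED = set(["welcome", "create_user", "date"])   # hypothetical list

  def installed_modules(path=MODULE_DIR):
      """Return the module names found on disk."""
      return set(os.path.splitext(os.path.basename(f))[0]
                 for f in glob.glob(os.path.join(path, "*.py")))

  def main():
      found = installed_modules()
      missing = EXPECTED - found
      unexpected = found - EXPECTED
      if missing:
          print("MISSING firstboot modules: %s" % ", ".join(sorted(missing)))
      if unexpected:
          print("UNEXPECTED firstboot modules: %s" % ", ".join(sorted(unexpected)))
      sys.exit(1 if missing else 0)

  if __name__ == "__main__":
      main()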
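
Sketch for the install test plan item above: enumerating every boot method and package repository combination mentioned there makes the coverage gap concrete. The COVERED set is an illustrative guess, not an exact transcription of the current matrix.

  # Enumerate boot method x repository permutations and list the untested ones.
  # COVERED is an illustrative guess at what the matrix exercises today.
  from itertools import product

  BOOT_METHODS = ["CD", "DVD", "boot.iso", "pxeboot"]
  REPOSITORIES = ["CD", "DVD", "URL", "NFS"]
  COVERED = set([("CD", "CD"), ("DVD", "DVD"),
                 ("pxeboot", "NFS"), ("boot.iso", "URL")])

  all_combos = set(product(BOOT_METHODS, REPOSITORIES))
  gaps = sorted(all_combos - COVERED)
  print("%d of %d combinations are untested:" % (len(gaps), len(all_combos)))
  for boot, repo in gaps:
      print("  boot=%-8s repo=%s" % (boot, repo))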

Wishlist

  • Kparal - care about low-bandwidth testers - we should improve the accessibility of QA processes for low-bandwidth users. That primarily means using existing tools to lower the download requirements of individual release milestones: deltaiso, zsync (2).
  • Kparal - test days calendar - we could create and maintain a web calendar containing all Test Days (maybe even other QA activities?) that people could add to their calendar program, so they are notified when a new event is happening (not everybody follows the announcements on the mailing lists, and events are announced only a few days in advance, so people may easily forget about them -- my case). Just another way to achieve a little higher participation.
  • Pfrields - Reward testers -- we should reward repeat (or frequent) community QA/testers with a 4 or 8 GB USB key and maybe another gift. I have a small pot of money we can use for this, if Fedora QA team folks are not able to just do things like this. (I would advocate that Jlaska have a small pot of money for this, amount TBD.)
  • Mcepl - boot.fedoraproject.org -- it could be helpful for everybody present to deploy test-day images to http://boot.fedoraproject.org; not sure about bandwidth requirements for that, but it shouldn't be worse than everybody downloading whole images.
  • rhe - community involvement - Would love to have someone outside the core team help announce or host a planned test run (Desktop, Installation or other).
  • rhe - care about low-bandwidth testers - Delta ISOs made officially available.
  • Pfrields - test day incentive - One idea, if you have "repeat testers" we might want to reward them with a 4-8 GB USB key and maybe another gift?
  • Kparal - easily available links to current QA activities - I would like to see a floating frame on some of our important wiki pages (like QA) containing a list of all current activities. The list could contain links to the current installation test, the current (or very near) test day, today's meeting, and so on: all the places where people can get involved. The list would probably have to be updated manually, but I think it would be worth it. I got this idea when I was in a live CD environment: I opened the QA wiki page, and it was nearly impossible to find a link to the current installation testing. If it was nearly impossible for me, what about the general public?
  • Kparal - participate in Summer Coding - I have just found out about the Summer Coding 2010 event. It would be great if we could create a few ideas for summer student projects (for example related to AutoQA, but maybe also other activities) and let the students work with us to finish the tasks. We gain more manpower and the possibility that the student will stay in QA even after the task is done.
  • Pcfe - test day timing - Give ~one week advance notice for test days
  • Pcfe - test day reasoning - Record the reasons for reboots between tests. A lot of the xorg-x11-drv tests require reboots, and in many cases using a live CD can lead to a really long boot time. Perhaps each case that requires a reboot could more clearly explain why it is needed (fresh module set, enabling or disabling KMS, etc.).
  • Pcfe - test day page groupings - Pcfe didn't like having to click through multiple pages; it would be nice to have a single printable page with all test case instructions. Stickster points out that this can be accomplished by making a page that transcludes all the individual test instructions for a day in one place.
  • Pcfe - make feedback easier - currently we track test results in a wiki table; while it looks nice, it is annoying and error-prone to edit.
  • jlaska - installation testing - I'd like to propose removing all the RAID tests and replacing them with 1 or 2 general RAID tests. We aren't hitting problems anymore that are specific to one RAID level (e.g. failing on RAID0 but working on RAID5). The main focus for these tests is to make sure that we identify a real-world RAID install partitioning setup, that anaconda can execute that partition scheme, and that dracut can boot it.
  • jlaska - i18n installation testing - What does the Fedora i18n team test, and what don't they test? Can we better coordinate with them?
  • jlaska - Test Day Help - identify and encourage a group of participants to act as test day specialists (perhaps with office hours). They can sign up and be available during times at different test events. They understand the basics of Fedora and know where to go to find documentation. They would be the first level of triage for test day problems.
  • jlaska - Blocker Reviews - These are too time-intensive for the team. Can we somehow improve this to make it scale better? Perhaps improved guidelines for having the bug assignee and reporter negotiate blocker escalation, so the QA+RelEng team would only review issues where things are unclear. Spending 4 hours reviewing blocker bugs on IRC doesn't seem like the best use of time, does it?
  • jlaska - Last known good - Prior to the Alpha and the Branched compose availability, we built custom composes and ran them through the QA:Rawhide_Acceptance_Test_Plan to determine whether the compose was good enough for general use. We maintained a symlink pointing to the last known good install source. Once the Branched install images were available, this process no longer made sense. We need to figure out how to make last known good meaningful for the installer. This includes figuring out how to get new anaconda packages built and submitted to bodhi, when the compose happens, and where it pulls content from (updates or updates-testing?). Do the automated install tests run and provide positive bodhi karma for anaconda? (A minimal symlink-update sketch appears after this list.)
  • robatino - Installation test improvement - Some install tests are subsets of others - for example, if I do a default graphical install from the DVD, then I can do BootMethodsDvd, PackageSetsDefaultPackageInstall, Anaconda autopart install, and Anaconda User Interface Graphical simultaneously. If using a virtual guest, I can also throw in Anaconda partitioning uninitialized disks. Some way of grouping results to capture this relationship could help testers (see the grouping sketch after this list).
    • Some new tests could be thrown in that could be subsets of existing tests so they wouldn't make the testing take any longer. For example, when entering either the root or firstboot password, a weak password could be tried first, to make sure the warning is generated, then a strong one. The time-consuming part of these tests is waiting for a large number of packages to be installed.
    • Testcase Mediakit Repoclosure and Testcase Mediakit FileConflicts should be done together since the mount part of the instructions is exactly the same.
    • Some tests such as Testcase Anaconda rescue mode and the save traceback tests are very quick since they don't involve actually installing. The rescue mode test requires an existing install, the others don't.
  • wwoods - Better bug reporting - We need to better advertise how to self-diagnose problems (similar to Category:Debugging): instead of filing a bug first, discuss the problem on the mailing list. Do we have the right information listed to guide problem analysis? The first line of defense should be the test@ mailing list, to collaboratively solve problems.
  • jlaska - Internationalization - We keep having i18n issues (see RHBZ #571900). It's not clear how the language and keymap settings are supposed to propagate throughout the OS after install. For example, should those values be used by dracut for passphrase entry? GDM for login and password entry? X, etc.? It would be great for folks to sit down and clear up expectations.
  • dramsey - Internationalization and all-in-one encompassing system testing - +1 for the previous mention of i18n, I am so there! For the second part of my wishlist idea: would it be useful to have virtual machines available for all-in-one encompassing system testing? My thought is something like this: via a hosting system like Linode, or in-house, have accessible virtual machines that can be used for testing. As the tester tests and an error is captured, another person can view the content via their own browser. Sort of kills two birds with one stone, system-wise.  :) It depends on whether you are interested in receiving feedback relevant to the overall system. Also, consider a test day / test week midway through the schedule to do a pseudo encompassing system test via ISOs, a sort of "see if what was fixed halfway still works" check. For example, if bug #12345 broke NFS in week 1 and the fix landed in week 7, would it be useful to "redo" NFS and the other modules which were fixed halfway through the schedule? Food for thought.  :)
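
Sketch for the "last known good" item above: a minimal Python helper that repoints a last-known-good symlink once an acceptance run passes. The compose tree path, link name, and the acceptance_passed() hook are hypothetical placeholders, not the existing tooling.

  # Repoint a "last known good" symlink after an acceptance run passes.
  # Paths and the acceptance_passed() hook are hypothetical placeholders.
  import os

  TREE_ROOT = "/srv/composes"                      # hypothetical compose tree
  LKG_LINK = os.path.join(TREE_ROOT, "last-known-good")

  def acceptance_passed(compose_dir):
      """Placeholder: plug the acceptance test plan results in here."""
      return os.path.exists(os.path.join(compose_dir, "ACCEPTED"))

  def promote(compose_dir):
      """Point the last-known-good symlink at compose_dir (atomic rename)."""
      tmp_link = LKG_LINK + ".new"
      if os.path.lexists(tmp_link):
          os.remove(tmp_link)
      os.symlink(compose_dir, tmp_link)
      os.rename(tmp_link, LKG_LINK)

  if __name__ == "__main__":
      candidate = os.path.join(TREE_ROOT, "branched-20100501")  # hypothetical
      if acceptance_passed(candidate):
          promote(candidate)
          print("last-known-good -> %s" % candidate)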
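
Sketch for robatino's grouping idea above: one combined run is expanded into the individual test cases it also covers, so a single result can be credited to all of them. The case names are taken from the item itself; the mapping is illustrative, not an official matrix definition.

  # Expand one combined install run into the test cases it also covers.
  # The mapping is illustrative, not an official matrix definition.
  COVERS = {
      "DVD default graphical install": [
          "BootMethodsDvd",
          "PackageSetsDefaultPackageInstall",
          "Anaconda autopart install",
          "Anaconda User Interface Graphical",
      ],
  }

  def expand(run_name, extra=()):
      """Return every test case a single run can be credited with."""
      return COVERS.get(run_name, []) + list(extra)

  cases = expand("DVD default graphical install",
                 extra=["Anaconda partitioning uninitialized disks"])  # virt guest
  for case in cases:
      print("%-45s PASS" % case)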

Recommendations

After enough time has been given for feedback, the QA team will discuss and make recommendations on changes to prioritize for Fedora 14. This section organizes and lists the recommendations.

References