Usability Testing

Why Usability Testing?
Usability testing puts an application in front of real users and watches where they succeed, hesitate, or get lost. A handful of short, recorded sessions is often enough to reveal problems that developers and designers can no longer see in their own interfaces, and the results give concrete evidence for design changes instead of opinion.

Usability Testing Terms

 * Participant - the person taking the usability test
 * Tester - the person running the usability test
 * Thinking Aloud - the participant talks through what they are doing, for the benefit of the tester and the recording
 * Active Intervention - the tester asks questions, such as "What would you do next?" and "What do you think about the menu structure here?"
 * Remote Testing - usability testing done at a distance, where the tester and the participant are not in the same room but communicate via some audio device
 * Test Scenario - a written script of the task the participant is to perform during the usability test; it should encompass the intended use of the application

What you need to do Usability Testing

 * A tester (to run the test)
 * Two Copies of the Consent Form (one for tester, one for participant)
  * this might be covered by the CLA - check with legal?
 * Signup Form
 * A written Test Scenario
 * Paper/laptop for Notes
 * A computer (with the application/website you're testing)
 * A participant
 * A videocamera or screen/audio capture program
 * A room where you won't be interrupted during the test

Basic Order

 * 1) Choose Application/Website: Choose an application/website you want to test
 * 2) Intended Use: Determine the intended use of this application/website
 * 3) Intended Audience: Determine the intended audience for the application (general user, power user, sys admin, other)
 * 4) Create a Test Scenario: Create a test scenario for the participant to follow
 * 5) Recruit Participants: Five is a good number.  You can strong-arm friends and family, recruit online, or just get some co-workers on a slow afternoon
 * 6) Run the Usability Test: Yes, do that.

EXAMPLE:
 * 1) Application: Software Updater
 * 2) Intended Use: Updating the system, which involves downloading and installing new packages
 * 3) Intended Audience: This applies to all users, so general users are the right participant choice
 * 4) Test Scenario: Have the participant log in to a system (for the first time); a notification about updates will appear, and the participant should then install the updates
 * 5) Recruit Participants: Grab some people at your office, or get some people to volunteer online
 * 6) Run the Usability Test: Run the tests, and watch the videos with your colleagues

Documents to Create

 * Waiver for participants - an outline for a waiver
 * Guidelines for testers (the person running the test)
 * Example test results (a video, some "conclusions" from the test video)
 * http://betterdesktop.org/ has plenty of example data.

Things to test
Ideal things to test:
 * File Browser
 * Software Updater

Authors/maintainers of applications may sign up to volunteer their application to be tested - they will need to include this information about the application:
 * Intended use
 * Intended Audience (general users, power users, sys admins)
 * (Optional) can also propose test scenarios

Avoid testing things that are not going to change or we are not capable of changing.

Types of Participants
To consolidate our efforts, each participant will self-identify with one of these categories, so that testers can match participants to an application's intended audience.
 * General User: Daily to semi-daily computer user, may dabble in writing code
 * Power User: Constant computer user, has written code
 * Sysadmin: A Systems Administrator

Participant Signup Form Info

 * What kind of overall computer user are you? (general user, power user, sysadmin)
 * Primary OS used (Mac, Windows, Linux - and which desktop, e.g. GNOME or KDE)
 * Age
 * Gender
 * Experience with Linux (first-time user, casual, experienced)
 * Experience with Fedora (first-time user, casual, experienced, dev team)

If not testing at FUDCon, recruit volunteer participants and collect this information from them. It's easier if there is a paper form, as well as an online form, for participant signups.
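The signup fields above can be collected however you like; as a rough sketch (the field names, class names, and category values here are illustrative assumptions, not a fixed schema), one signup record might look like:

```python
from dataclasses import dataclass
from enum import Enum

# Assumed category values, mirroring the "Types of Participants" section.
class UserType(Enum):
    GENERAL = "general user"
    POWER = "power user"
    SYSADMIN = "sysadmin"

@dataclass
class Participant:
    """One row of the signup form; field names are illustrative."""
    user_type: UserType
    primary_os: str        # e.g. "Linux (GNOME)", "Windows", "Mac"
    age: int
    gender: str
    linux_experience: str  # "first-time user", "casual", or "experienced"
    fedora_experience: str # same levels, plus "dev team"

    def matches_audience(self, audience: UserType) -> bool:
        # A simple check a tester might use when matching signups
        # to an application's intended audience.
        return self.user_type == audience

p = Participant(UserType.GENERAL, "Linux (GNOME)", 34, "F",
                "casual", "first-time user")
print(p.matches_audience(UserType.GENERAL))  # True
```

A paper form and an online form can both feed records like this, which makes it easy to filter signups by intended audience later.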

Writing a Test Scenario
Write a scenario for the application that tests its intended use. For example, let's say we're testing the File Browser. The tester would tell the participant to perform a task that would take them through the File Browser.

Estimate the amount of time for the task - this is important in order to give the participant some idea of how long they'll be testing. Also, if the task is very short, testing it can be combined with other short tasks, in order to get good mileage out of one participant.

ALSO NOTE: After watching your first usability test with this scenario, you may need to adjust your time estimate for the completion of the tasks. This is normal.
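To make the time-estimate bookkeeping concrete: if you keep a per-task estimate in minutes, you can check which short tasks fit into a single participant's session. This is only a sketch (the session budget and task names are made-up examples, not recommendations):

```python
def plan_session(task_minutes, budget=30):
    """Greedily pack (name, minutes) task estimates into one session
    of at most `budget` minutes; returns the tasks that fit and the
    total time used."""
    planned, used = [], 0
    for name, minutes in task_minutes:
        if used + minutes <= budget:
            planned.append(name)
            used += minutes
    return planned, used

tasks = [("find a picture", 5), ("install updates", 15), ("rename a folder", 5)]
print(plan_session(tasks))
# (['find a picture', 'install updates', 'rename a folder'], 25)
```

Remember to revise the per-task numbers after your first recorded session, as noted above.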

Example scenario for the File Browser: On this computer, there is a folder titled "Cute Animals" in the user's Pictures folder. Go to this folder, pick your favorite cute animal picture, and move it to the desktop.

Notes about scenario tasks
Generally, a user will be more involved in a task if they get a choice or options to choose from. Of course, for some tasks, this isn't possible, but it's fun to try.

Recording the Usability Testing
Most usability tests need to be recorded so the tester can show the test to other designers, or review it later to make sure their notes are sound. What you want to record during a test is the screen/monitor and the participant's and tester's voices.

Different Methods of Recording a Usability Test

 * Videocamera
 * Screen capture and audio recording software

During a test

 * Make sure the participant signs the consent form
 * Reassure the participant that this is in NO WAY a test of their skills or knowledge; it is a test of the application itself.
  * This should also be the mindset of the persons evaluating the usability tests.
 * Encourage the participant to "think aloud" throughout the test.
  * Sometimes this can be a bit hard, as it's not something people are used to doing.
  * This means encouraging them to tell you:
   * What they're planning to do
   * Why they make the decisions they make
   * What they expect to happen when they make a choice
 * You CANNOT point out where things are or what is happening to the participant - these things should be made clear by the application, and if they aren't, the resulting confusion mirrors the lone user experience (which is what we're attempting to emulate).
 * If a participant is not actually "thinking aloud" very much, ask questions that help get a peek into what they're thinking - for example, if a user hits a button and then says "oh!" or "What?", ask them what they expected to happen versus what happened. A tester should note a participant's confusion or surprise and the reasons for it.
 * Avoid asking the participant leading questions or giving them any indication that they are way off course. A leading question can skew the participant's actions. Try to stay neutral and ask things like "Which of these would you click/select, and why?"
 * There are times when things have gone so far afield that you may have to start over. Again, this is not the participant's fault, or yours, or even necessarily the application's. After a usability test has gone awry, you can usually adjust the test scenario to compensate for some of the problem.

After a test

 * Thank the participant
 * Summarize the participant's experience in notes