Preparation & Testing
To create good task scenarios, you will need to have identified the top tasks users try to complete when visiting your site. You may also write scenarios that test items you suspect are usability issues, to see if they really are. Good scenarios have a goal: what is the user to do, what question are they trying to answer, or what data do they need to complete the task?
Keep the scenarios short so that participants don’t need to read a lot and scenarios are easy to understand. Do not include information about how to complete the task. You will need to create about ten scenarios for a typical one-hour test.
Example scenario (short): In which government building can you find Bertrand Adams' 1937 painting Early Settlers of Dubuque?

Example scenario (with context): Your grandfather told you that he posed for Bertrand Adams when he was painting his large 1937 masterpiece, Early Settlers of Dubuque. You heard that the painting is displayed in a Federal building. In which building can this artwork be found?
Record Completion Paths
You will need to record the task scenario completion paths. Having the completion paths will help the observers and note-takers know what to expect and how to complete the task. Participants should not see the paths.
Always try your scenarios out in a pilot test. If your ‘trial’ participants do not understand a scenario, rewrite it and run a second pilot test.
You will want to identify the types of participants who are similar to your site users.
You may have different potential user groups (e.g., physicians, patients, researchers). Try to include representatives of all of these groups. A common mistake is to use internal staff as participants when your site is meant for an external audience.
For diagnostic usability testing, six to eight users are usually enough to uncover the major problems in a product. If you want to conduct formal quantitative testing on your products or systems, you'll need more people to derive statistical results. If you do iterative (repeated) usability testing over the course of developing the Web site, many users will participate in testing one or another version of the emerging site.
If the team has access to representative users, you can recruit from those individuals.
If the team does not have access to representative users, you will have to hire a commercial recruiting company. Most recruiting companies require two to three weeks to find the necessary number and types of participants.
For templates you can use, see Usability Test Screeners. The questions are examples, taken from government Web site screeners. You may want other questions or a different mix of participants for a usability test of your site.
Recruitment costs include the costs associated with finding participants, incentives to get them to come (e.g., gifts or money), and in some cases travel or parking expenses.
Make sure you have everything prepared and checked prior to the test sessions. If you are concerned, do a dry run to check the equipment and materials, or run a pilot test with a volunteer participant. The pilot test allows you:
- to test the equipment and give the facilitator and note-takers practice
- to get a good sense whether your questions and scenarios are clear to the participant
Run the pilot test a few days prior to the first test session so that you have time to change the scenarios or other materials if necessary.
The facilitator will welcome the participant and invite the participant to sit in front of the computer where they will be working. The facilitator explains the test session, asks the participant to sign the video release form, and asks the profile (demographic) questions. The facilitator explains thinking aloud and asks if the participant has any additional questions. The facilitator explains where to start.
The participant reads the task scenario and begins working on the scenario while they think aloud. The note-takers take notes of the participant’s behaviors, comments, errors and completion (success or failure).
The session continues until all task scenarios are completed or the allotted time has elapsed. The facilitator asks the end-of-session subjective questions, thanks the participant, gives the participant the agreed-on incentive, and escorts them from the testing environment.
Tips for Good Test Facilitation
- Treat participants with respect and make them feel comfortable.
- Remain neutral – you are there to listen and watch. If the participant asks a question, reply with “What do you think?” or “I am interested in what you would do.”
- Do not jump in and help participants immediately and do not lead the participant. If the participant gives up and asks for help, you must decide whether to end the scenario, give a hint, or give more substantial help.
- The team should decide how much of a hint you will give and how long you will allow the participants to work on a scenario when they are clearly going down an unproductive path.
- Take good notes. Note-takers should capture what the participant did in as much detail as possible.
- Note-takers should capture what participants say in the participant’s words.
- The better the notes taken during the session, the easier the analysis will be.
During the sessions, collect the metrics that you identified in the usability test plan.
Successful Task Completion
Each scenario requires the participant to obtain specific data that would be used in a typical task. The scenario is successfully completed when the participant indicates they have found the answer or completed the task goal.
In some cases, you may want to give participants multiple-choice questions. Remember to include the questions and answers in the test plan and provide them to the note-takers and observers.
Critical & Non-Critical Errors
Critical errors are deviations at completion from the targets of the scenario, for example, reporting the wrong data value because of the workflow the participant followed. Essentially, the participant cannot finish the task. Participants may or may not be aware that the task goal is incorrect or incomplete.
Non-critical errors are errors that the participant recovers from and that do not prevent the participant from successfully completing the task. These errors simply make the task less efficient to complete. Exploratory behaviors, such as opening the wrong navigation menu item or using a control incorrectly, are non-critical errors.
Error-free rate is the percentage of test participants who complete the task without any errors, critical or non-critical.
Time On Task
The amount of time it takes the participant to complete the task.
Subjective Measures
These evaluations are self-reported participant ratings of satisfaction, ease of use, ease of finding information, and so on, where participants rate each measure on a 5- to 7-point Likert scale.
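The completion, error, time-on-task, and rating measures described above can be tallied with a short script once the sessions are done. This is a minimal sketch, assuming a hypothetical per-session record kept by the note-takers; the field names and example data are illustrative, not part of any standard tool.

```python
# Hypothetical per-session record and metric summary for a usability test.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Session:
    completed: bool          # participant reached the task goal
    critical_errors: int     # deviations that prevented task completion
    noncritical_errors: int  # recovered errors (lost efficiency only)
    seconds_on_task: float   # time from reading the scenario to finishing
    likert_rating: int       # e.g., ease-of-use rating on a 1-5 scale

def summarize(sessions: list[Session]) -> dict:
    n = len(sessions)
    return {
        # Successful task completion rate.
        "completion_rate": sum(s.completed for s in sessions) / n,
        # Error-free rate: no errors of either kind.
        "error_free_rate": sum(
            s.critical_errors == 0 and s.noncritical_errors == 0
            for s in sessions) / n,
        "mean_time_on_task": mean(s.seconds_on_task for s in sessions),
        "mean_rating": mean(s.likert_rating for s in sessions),
    }

# Example data for three participants on one task scenario.
sessions = [
    Session(True, 0, 1, 95.0, 4),
    Session(True, 0, 0, 70.0, 5),
    Session(False, 1, 2, 180.0, 2),
]
print(summarize(sessions))
```

With the example data, two of three participants complete the task but only one is error-free, which illustrates why completion rate and error-free rate are reported separately.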
Likes, Dislikes and Recommendations
Participants report what they liked most about the site, what they liked least, and their recommendations for improving it.