Data Analyses & Report
At the end of the usability test you will have several types of data (depending on the metrics you collected). You should have quantitative data: success rates, task times, error rates, and satisfaction questionnaire ratings. Your qualitative data might include: observations about pathways participants took, problems experienced, comments, and answers to open-ended questions.
Enter the data in a spreadsheet to perform calculations, such as:
- percentage of participants who succeeded or not at each task
- average time to complete tasks
- frequency of specific problems
You may want to add participants’ demographic data so that you can sort by demographics and see whether any of the results differ across demographic groups. Make sure you identify the task scenario for each of the metrics.
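The calculations above are simple enough to do in any spreadsheet, but they can also be sketched in a few lines of code. The sketch below is illustrative only: the field names (participant, task, success, time_sec) and the sample records are invented stand-ins for whatever your spreadsheet columns hold.

```python
from collections import defaultdict

# Hypothetical per-session records; field names mirror typical
# spreadsheet columns and are placeholders, not a required schema.
sessions = [
    {"participant": "P1", "task": "Find trial", "success": True,  "time_sec": 95},
    {"participant": "P2", "task": "Find trial", "success": False, "time_sec": 210},
    {"participant": "P1", "task": "Contact us", "success": True,  "time_sec": 40},
    {"participant": "P2", "task": "Contact us", "success": True,  "time_sec": 55},
]

def task_metrics(sessions):
    """Per-task success rate (%) and mean completion time.

    Average time is computed over successful attempts only, since
    times for failed attempts measure something different.
    """
    by_task = defaultdict(list)
    for s in sessions:
        by_task[s["task"]].append(s)
    metrics = {}
    for task, rows in by_task.items():
        done = [r for r in rows if r["success"]]
        metrics[task] = {
            "success_rate": 100 * len(done) / len(rows),
            "avg_time_sec": (sum(r["time_sec"] for r in done) / len(done)) if done else None,
        }
    return metrics

print(task_metrics(sessions))
```

Averaging only successful attempts is one common convention; if you report times for failures as well, say so explicitly in the report so the numbers can be compared across tests.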
Read through the notes carefully, looking for patterns. Add a description of each problem. Look for trends and keep a count of problems that occurred across participants. Make sure your problem statements are exact and concise. For example:
Good problem statement: Clicked on link to Research instead of Clinical Trials.
Poor problem statement: Clicked on wrong link.
Poor problem statement: Was confused about links.
To ensure you report the important results, consider as you review the data how global each problem is (does it affect pages throughout the site?) and how severe (or serious) it is.
Your findings may have implications for other pages in the site (global). For example, you may find that participants could not find what they needed on the page because of text density. You could say that just that page needed to be fixed but you should also consider how many other pages are equally dense with text.
Some problems contribute more to participants being unable to complete the scenarios than others. Many groups note the severity of the problems on a three- or four-point scale. For example:
- Critical: If we don't fix this, users will not be able to complete the scenario.
- Serious: Many users will be frustrated if we don't fix this; they may give up.
- Minor: Users are annoyed, but this does not keep them from completing the scenario.
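Once problems are tagged with a severity level, it is easy to tally how often each occurred and list the most serious first. The sketch below assumes the three-level scale above; the problem records and the tie-breaking rule (frequency within a severity level) are invented for illustration.

```python
from collections import Counter

# Rank order for the three-level scale described above.
SEVERITY_ORDER = {"critical": 0, "serious": 1, "minor": 2}

# Hypothetical problem log: one record per observed occurrence.
problems = [
    {"desc": "Clicked on link to Research instead of Clinical Trials", "severity": "serious"},
    {"desc": "Could not find the search box", "severity": "critical"},
    {"desc": "Clicked on link to Research instead of Clinical Trials", "severity": "serious"},
    {"desc": "Complained about small font", "severity": "minor"},
]

# Count how often each problem occurred across participants...
counts = Counter(p["desc"] for p in problems)

# ...then list problems most-severe first, ties broken by frequency.
unique = {p["desc"]: p["severity"] for p in problems}
ranked = sorted(unique, key=lambda d: (SEVERITY_ORDER[unique[d]], -counts[d]))
for desc in ranked:
    print(f"[{unique[desc]}] x{counts[desc]}  {desc}")
```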
A good report should present just enough detail so that the method can be repeated in subsequent tests. Keep the sections short and use lots of tables to display the metrics. Focus on the findings and recommendations, and use visual examples to demonstrate problem areas.
Include a brief summary including what you tested (Web site or web application), where and when the test was held, equipment information, what you did during the test (include all testing materials as an appendix), the testing team, and a brief description of the problems.
Include the test methodology so that others can recreate the test. Explain how you conducted the test by describing the test sessions, the type of interface tested, metrics collected, and an overview of task scenarios.
Describe the participants and provide summary tables of the background/demographic questionnaire responses (e.g., age, profession, internet usage, sites visited). Provide brief summaries of the demographic data.
Describe what the facilitator and data loggers recorded. Depending on the metrics you collected you may want to show:
- the number and percent of participants who completed each scenario, and all scenarios (a bar chart often works well for this)
- the average time taken to complete each scenario for those who completed the scenario
- the satisfaction results
Describe the tasks that had the highest and lowest completion rates. Provide a summary of the successful task completion rates by participant, task, and average success rate by task and show the data in a table. Follow the same model for all metrics.
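One rough sketch of the summary table described above: success (1/0) by participant and task, with a per-task completion-rate row at the bottom. The participant IDs, task names, and results here are invented for illustration.

```python
# Hypothetical success data: (participant, task) -> 1 (completed) or 0.
results = {
    ("P1", "Task 1"): 1, ("P1", "Task 2"): 1,
    ("P2", "Task 1"): 1, ("P2", "Task 2"): 1,
    ("P3", "Task 1"): 0, ("P3", "Task 2"): 1,
}
participants = sorted({p for p, _ in results})
tasks = sorted({t for _, t in results})

# Print a participant-by-task matrix...
print("Participant  " + "  ".join(tasks))
for p in participants:
    print(f"{p:<11}  " + "  ".join(str(results[(p, t)]).center(len(t)) for t in tasks))

# ...and close with the average success rate per task.
rates = {t: 100 * sum(results[(p, t)] for p in participants) / len(participants)
         for t in tasks}
print("Rate         " + "  ".join(f"{rates[t]:.0f}%".center(len(t)) for t in tasks))
```

The same matrix shape works for the other metrics (time, errors, satisfaction): keep participants as rows, tasks as columns, and put the per-task summary statistic in the last row.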
Findings and Recommendations
List your findings and recommendations using all your data (quantitative and qualitative, notes and spreadsheets). Each finding should have a basis in data—in what you actually saw and heard.
You may want to have just one overall list of findings and recommendations or you may want to have findings and recommendations scenario by scenario, or you may want to have both: a list of major findings and recommendations that cut across scenarios as well as a scenario-by-scenario report.
Report Positive Findings
Although most usability test reports focus on problems, it is also useful to report positive findings. What is working well must be maintained through further development. An entirely negative report can be disheartening; it helps the team to know when there is a lot about the Web site that is going well.
Link Findings and Recommendations
Each finding should include as specific a statement of the situation as possible, and each finding (or group of related findings) should include recommendations on what to do. For example:
Participants were unwilling to read a dense page of text
Finding: 9 of 10 participants who successfully got to the page that had the information they were looking for in this scenario expressed dismay at how much text there was on the page. They said that it was too much to read. (Show a small picture of the page). When asked what they would do to get the answer to the question in the scenario, five of nine said they would guess the answer; four of nine said they would try to find a person to call or would ask someone they knew.
Recommendations: Break up the information on the page into a series of short questions and answers. Even when using bulleted lists (as there are on the page in this scenario), put space between each bulleted item if the items are longer than a few words. Also have only a few bullets in each list (not 20 as in the list on the page in the scenario).
Provide a Severity Rating
If you marked problems in your analysis as local/global and with a severity level, report those.
Include Screen Shots and Video Clips
You can make the report both more informative and more interesting by including visuals. Include screen shots so readers can visualize what you were testing. Include parts of screens to illustrate specific areas that are working particularly well or that are causing problems for users.
If you are presenting the report electronically and the readers of the report have the technology available to see video clips, include a few short clips to illustrate specific points. People who did not observe the actual test sessions are often most convinced of problems and the need to fix them by watching and listening to relevant video clips.
Implement and Retest
For a usability test to have any value, you must use what you learn to improve the site. You may not be able to implement all the recommendations. Developing any product is a series of trade-offs in which you balance schedule, budget, people's availability, and the changes that are needed.
If you cannot implement all the recommendations, develop priorities based on fixing the most global and serious problems. As you prioritize, push to get the changes that users need. The cost of supporting users of a poorly designed site is much greater than the cost of fixing the site while it is still being developed.
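Prioritizing by the two dimensions discussed earlier (severity and local/global scope) can be sketched as a simple two-key sort. The findings below and the exact ordering (severity first, then scope) are illustrative, not a prescribed scheme.

```python
# Hypothetical findings tagged with the severity scale and the
# local/global distinction described earlier in this page.
findings = [
    {"fix": "Shorten dense text pages",   "scope": "global", "severity": "serious"},
    {"fix": "Rename 'Research' link",     "scope": "local",  "severity": "critical"},
    {"fix": "Break up long bullet lists", "scope": "global", "severity": "critical"},
]

sev_rank = {"critical": 0, "serious": 1, "minor": 2}
scope_rank = {"global": 0, "local": 1}

# Global, critical problems float to the top of the fix list.
priority = sorted(findings, key=lambda f: (sev_rank[f["severity"]], scope_rank[f["scope"]]))
for f in priority:
    print(f'{f["severity"]:<8} {f["scope"]:<6} {f["fix"]}')
```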