Usability Testing is a UX Research methodology commonly used to investigate whether a particular product or service is easy to use and understand. Participants are recruited and asked to perform tasks using some software, application, website, or other solution, digital or physical. The observed results allow researchers to learn whether people can successfully complete a specific task and whether the solution meets their needs and expectations, and to identify the main problems and opportunities for improvement.
The tests can be quantitative, when focused on statistics and numbers, or qualitative, when the focus is on analyzing user behavior and understanding how and why certain actions happen. In both cases, it is possible to rely on automated tools or on the in-depth work of a UX Design team. The choice of methodology varies according to the objective and the questions the tests are intended to answer. Here are some common questions:
- “Do most people use the menu or the search bar on the homepage?”
- “Which CTA button allows a better conversion on that page?”
- “Search filters have been redesigned, are they easy to use?”
- “How can we increase the number of registrations on this page?”
The last two questions are good examples of where qualitative usability tests can be very useful, and it is no coincidence that this methodology is the one most applied to generate business insights. But how should the results of qualitative tests be analyzed? What should be done with the observed findings to generate valuable insights and a positive impact on business strategy?
Metrics in Qualitative Usability Testing
In qualitative tests, metrics can help a lot in measuring the participant experience and making decisions about which improvements to prioritize. There are several metrics that can be considered in this analysis, such as task completion success rate, task time, and perceived ease of use.
The essential thing is to choose metrics that are aligned with the purpose of the test and the strategies of the product or service in question, avoiding measuring what is not relevant to the business. Regardless of the choice, a simple and common way of applying metrics is based on the analysis of the findings observed after the execution of the tasks. For this to succeed, it may be helpful to start the planning phase by listing all the goals and tasks to be performed by the participants, as well as the expected success criteria for each one of them.
Examples of goal, task and expected success for the planning phase in usability testing.
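The planning table described above can be sketched as a simple data structure. The goals, tasks, and success criteria below are hypothetical examples for the online-course scenario discussed next, not taken from an actual test plan:

```python
from dataclasses import dataclass

@dataclass
class PlannedTask:
    goal: str               # what the test aims to learn
    task: str               # what the participant is asked to do
    success_criterion: str  # what counts as success for this task

# Hypothetical test plan for an online-course purchase flow
plan = [
    PlannedTask(
        goal="Check whether the course is easy to find",
        task="Find the course starting from the homepage",
        success_criterion="Participant reaches the course page without help",
    ),
    PlannedTask(
        goal="Evaluate the checkout experience",
        task="Complete the payment for the course",
        success_criterion="Participant receives the purchase confirmation",
    ),
]

for item in plan:
    print(f"Goal: {item.goal}\nTask: {item.task}\nSuccess: {item.success_criterion}\n")
```

Listing the plan this explicitly, whether in code, a spreadsheet, or a document, keeps each task tied to a measurable outcome before any participant is recruited.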
Imagine a scenario where you want to evaluate how to increase the number of purchases of a certain online course. This flow involves several steps: finding the course on the website, browsing the course page, registering or logging in, completing the payment, and finally receiving the purchase confirmation. A useful recommendation is that, during the test planning phase, the flow should be broken into smaller tasks that will be listed and analyzed separately. Then, in the execution phase with the participants, the tasks can be added to a spreadsheet, using a tabular format as a basis for writing down all the findings and observed usability problems.
Example of how to mark down the success of each task when running usability testing.
As seen in the image, the metric used in the example was the success rate in executing tasks. Using semantic colors to highlight “success” (green), “had difficulty” (yellow) and “failure” (red) helps identify immediately where usability issues are concentrated. In addition to marking down whether each participant was able to complete a task, it is important to add notes whenever they are relevant to solving the usability problems observed.
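The spreadsheet tally can be reduced to a per-task success rate with a few lines of code. The task names and outcomes below are made-up illustrations mirroring the green/yellow/red marks, assuming only an unaided "success" counts toward the rate:

```python
from collections import Counter

# Hypothetical observations: one outcome per participant for each task,
# mirroring the success / had-difficulty / failure marks in the spreadsheet.
observations = {
    "Find the course":      ["success", "success", "difficulty", "success", "failure"],
    "Register or log in":   ["success", "failure", "failure", "difficulty", "failure"],
    "Complete the payment": ["success", "success", "success", "success", "difficulty"],
}

def success_rate(outcomes):
    """Share of participants who completed the task without difficulty."""
    counts = Counter(outcomes)
    return counts["success"] / len(outcomes)

for task, outcomes in observations.items():
    print(f"{task}: {success_rate(outcomes):.0%} success")
```

In this made-up data, "Register or log in" scores only 20% success, which is exactly the kind of signal that points the analysis toward a specific step of the flow.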
Analysis of Findings to Prioritize Improvements
After listing all the findings, the next step is to organize them according to severity ratings. This can be done considering criteria such as:
- The frequency with which the problem happened among the participants: was it common or rare?
- The impact that the problem has on the flow: can users overcome it or is it a barrier that prevents them from continuing to execute the task?
- The persistence of the problem: is it something that will only be problematic at first, or will it continue to impact users over time?
Taking the scenario and images previously shown, it is possible to combine the above criteria with the success rate and usability heuristics to support the analysis and define a severity rating for each task performed. By doing so, the user experience assessment is more likely to be grounded in technical criteria rather than subjective opinions.
Depending on the context of the product or service being tested and the objectives set, ratings can be based on a scale of absolute numbers, levels of criticality such as “High, Medium and Low” or even percentages, as in the following example.
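One simple way to combine the three criteria into a rating is to score each on a small scale and map the total to a level. The 0–2 scale and the thresholds below are illustrative assumptions, not a standard; adapt them to your product's context:

```python
def severity_rating(frequency, impact, persistence):
    """Map frequency, impact, and persistence (each scored 0-2,
    where 0 = low and 2 = high) to a High/Medium/Low rating.
    Scale and thresholds are illustrative, not a standard."""
    total = frequency + impact + persistence
    if total >= 5:
        return "High"
    if total >= 3:
        return "Medium"
    return "Low"

# A problem seen by most participants (2), blocking the flow (2),
# and unlikely to fade as users gain familiarity (1):
print(severity_rating(frequency=2, impact=2, persistence=1))  # High
```

The same structure works for numeric or percentage scales; what matters is that the criteria are scored consistently across all findings.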
A severity rating example that can be applied when analyzing usability testing results.
Another relevant aspect to be taken into account is the estimated effort to execute the action steps resulting from the tests, as some of them may represent issues that are easy to be fixed and others may require complex features to be built. At this point, collaboration between teams such as Design, Development, Product and key stakeholders is essential. Finally, it is also important for teams to jointly determine how much each finding impacts the business: a problem observed in the experience with the search filter may not have the same relevance as a problem in the payment flow of a website, for example.
Some prioritization methods can help with this collaborative assessment. One of them is the “Impact–Effort Matrix”, which is quite suitable for the evaluation aspects mentioned here. This matrix has four quadrants: quick wins, big bets, fill-ins, and money pits, which reflect the relative value to the user against implementation complexity and effort. In general, projects should prioritize items assigned as “quick wins” and avoid “money pits”, while “big bets” and “fill-ins” should be carefully evaluated.
An Impact-Effort Matrix can be a useful tool for prioritizing findings from usability testing.
By using such methods, it becomes easier and more accurate to decide, as a team, what should and what should not become a priority action in the product roadmap.
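The quadrant assignment can be sketched as a small classifier. The 1–10 scale, the threshold, and the example findings are all illustrative assumptions:

```python
def quadrant(impact, effort, threshold=5):
    """Classify a finding on the Impact-Effort Matrix.
    Scores run 1-10; the threshold splitting low from high is illustrative."""
    if impact >= threshold:
        return "quick win" if effort < threshold else "big bet"
    return "fill-in" if effort < threshold else "money pit"

# Hypothetical findings scored collaboratively by the team
findings = {
    "Clearer error message on payment form": (9, 2),
    "Rebuild search filters from scratch":   (8, 9),
    "Tweak button label wording":            (3, 1),
    "Redesign rarely used settings page":    (2, 8),
}

for name, (impact, effort) in findings.items():
    print(f"{name}: {quadrant(impact, effort)}")
```

Even when the scoring is done on a whiteboard rather than in code, making the impact and effort scales explicit keeps the prioritization discussion anchored to the same criteria for every finding.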
After all, it is always good to remember some of the benefits and outcomes of running usability testing:
- Improved user experience, product performance and adoption: by uncovering problems and opportunities, companies can address them to offer easy-to-use, efficient and intuitive products or services that are highly valued by users;
- Increased customer satisfaction: by providing a more user-friendly experience from the usability tests findings, companies are creating higher levels of satisfaction and loyalty;
- Reduced development and support costs: by identifying and addressing problems early in the development process, organizations can avoid the need for costly redesigns and rework.
In addition to making the right decision to test throughout product development, having clarity on the objectives of a usability test is a crucial starting point. When you know what you want to discover from users, you can determine the most appropriate methodology and plan the analysis of results efficiently.
Want to Know More about Usability Testing?
Follow these links to read relevant content on the subject:
This piece was written by Marina Gurjão de Carvalho, UX Designer at Encora’s Experience Design Studio. Thanks to Flávia Negrão, Aline Arielo, Leonardo Lohmann and João Caleffi for reviews and insights.
Fast-growing tech companies partner with Encora to outsource product development and drive growth. Contact us to learn more about our software engineering capabilities.