If you’re a designer, you might be asked to conduct an expert review of a website or application to find usability issues or opportunities to improve the experience. Stakeholders will want to see evidence to support your review, so I thought I’d share a quick and dirty method of calculating a usability score.
There is an ongoing debate in the usability community over what is and what isn’t a heuristic evaluation. Although expert reviews and heuristic evaluations share a common goal of assessing a designed experience, the biggest difference between the methodologies is the lens used to conduct the review. If a reviewer isn’t examining a design through the lens of established heuristics and is instead compiling a checklist of bugs, fixes, and personal opinions, this isn’t considered a heuristic evaluation.
Although there is no definitive method to conduct an expert review, the report needs to highlight the successes and failures of the design. The review should take into consideration design consistency, style, accessibility, and usability, among other standards.
One method of review I’ve found to be effective is to combine an expert review with design heuristics and provide stakeholders with a heuristic quality score. A quality score is a simple metric that can let stakeholders know roughly how well a design ranks and can help uncover usability issues.
In 1995, Jakob Nielsen proposed ten usability heuristics for interface design that are widely regarded as the industry standard, cited by thousands of institutions including Stanford and MIT. Here is Nielsen's list of heuristics:

1. Visibility of system status
2. Match between system and the real world
3. User control and freedom
4. Consistency and standards
5. Error prevention
6. Recognition rather than recall
7. Flexibility and efficiency of use
8. Aesthetic and minimalist design
9. Help users recognize, diagnose, and recover from errors
10. Help and documentation
Once you’ve conducted the expert review and compared your findings with design heuristics, give each heuristic a score out of ten. These scores should reflect the number of issues found, how frequently they occur, and how severe they are. Then calculate the mean of all the heuristic scores. That mean value is your heuristic quality score.
For example, here is the equation for the mean of ten scores, one for each design heuristic. If the ten scores sum to 73:

(score 1 + score 2 + … + score 10) / 10 = 73 / 10 = 7.3

The mean score would be 7.3 out of 10.
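The calculation can be sketched in a few lines of Python. The ten scores below are hypothetical placeholders, chosen only so that they average to 7.3:

```python
# Hypothetical scores (out of 10) for each of Nielsen's ten heuristics,
# assigned during an expert review.
heuristic_scores = {
    "Visibility of system status": 8,
    "Match between system and the real world": 7,
    "User control and freedom": 6,
    "Consistency and standards": 9,
    "Error prevention": 7,
    "Recognition rather than recall": 8,
    "Flexibility and efficiency of use": 7,
    "Aesthetic and minimalist design": 6,
    "Help users recognize, diagnose, and recover from errors": 8,
    "Help and documentation": 7,
}

# The heuristic quality score is simply the mean of all ten scores.
quality_score = sum(heuristic_scores.values()) / len(heuristic_scores)
print(f"Heuristic quality score: {quality_score:.1f} / 10")  # → 7.3 / 10
```

The same calculation works for any set of heuristics; only the dictionary of scores changes.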
Include each heuristic score, along with your justification for it, in the expert review; clients and stakeholders will find this valuable. Low-scoring heuristics should be focal points for improving the design, while high-scoring heuristics show what is already working well.
The beauty of this approach is that it can be applied to almost any set of design heuristics, and it always produces a measurable score with practical insights. Calculating a quality score from the mean of the heuristic scores isn’t exactly scientific, but it will alert stakeholders to potential usability issues and give your expert review more authority.