In November of 2019, I joined LexisNexis as the first full-time researcher on a search engine product serving legal researchers and librarians at large law firms.
This product was mid-migration to a new design and platform. The new version had technical and design problems that needed to be resolved before more users could be moved over, but the product and UX teams were struggling to prioritize which of its many known usability issues to address first.
Assess the overall usability of the product to understand which specific issues to focus on. Additionally, benchmark the current design so that any new design prototypes could be compared against it.
I used the Practical Usability Rating by Experts (PURE) method to assess the overall usability of the product. The primary UX designer, a fellow UX researcher, and I served as the expert evaluators for the PURE analysis. We did this twice: once with our Novice User persona, and again with our Power User persona.
My Role in this Project
I determined the primary tasks for both of our PURE evaluations and the ideal steps for each task. I set up a reporting spreadsheet for the PURE analysis that tracked each evaluator's scores. Additionally, I ran each session by demonstrating the "happy path" of steps and letting my co-evaluators know where each step began and ended. I also served as an expert evaluator and assessed each task in the session. Finally, I analyzed our scoring of each step and task and collaborated with another UX Researcher on writing a report of our evaluation.
Our final report explained the overall scoring for both the Novice User and the Power User. The scoring was broken down by task and step for each persona. From there, I evaluated the scoring of the tasks and steps for both personas to uncover common usability issues within the product. We ended up with a clear idea of what needed to be addressed in future design work.
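The step-and-task scoring described above can be sketched in code. This is a minimal illustration, not the actual study's spreadsheet or data: the task names, step names, ratings, and the max-based consensus rule are all assumptions made for the example. It assumes the standard PURE scale, where each evaluator rates every step from 1 (easy) to 3 (likely failure point) and a task's score is the sum of its step scores.

```python
# Hedged sketch of PURE score aggregation. All tasks, steps, and
# ratings below are invented for illustration, not real study data.

# Each evaluator rates every step on the 1-3 PURE scale:
# 1 = easy, 2 = some friction, 3 = likely failure point.
ratings = {
    "Run a search": {
        "Open search page": [1, 1, 2],
        "Enter query":      [1, 2, 2],
        "Review results":   [2, 3, 3],
    },
    "Save a document": {
        "Locate save control": [2, 2, 3],
        "Confirm save":        [1, 1, 1],
    },
}

def step_score(evaluator_ratings):
    # One possible consensus rule (an assumption here): take the
    # worst rating, so any evaluator flagging friction raises the
    # step score. Teams may instead discuss to a shared score.
    return max(evaluator_ratings)

def task_score(steps):
    # A task's PURE score is the sum of its step scores; higher
    # totals flag the tasks that most need design attention.
    return sum(step_score(r) for r in steps.values())

for task, steps in ratings.items():
    print(f"{task}: {task_score(steps)}")
```

Laying the scores out this way makes it easy to sort tasks by total score and to spot the individual steps that drive a high total, which is essentially how the report's per-task, per-step breakdown surfaced common usability issues.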
I presented this assessment to the product team (product owner, product managers, engineers, UX designers, and visual designers) along with our list of usability problems prioritized by severity and complexity. From there, the team and I assessed the current product roadmap and realigned it with the results of this PURE analysis.
In the future, if I were to use the PURE methodology again, I would do a few things differently. I think it would be interesting to run the analysis with two groups:
Group 1: Other UX team members/experts who have a strong understanding of usability but are less "close" to the product than the colleagues I ran this PURE analysis with
Group 2: The same group of colleagues who work closely with me on the product and are intimately familiar with it
From there, I would facilitate a discussion of scoring between both groups of experts and amend the scores to reflect both groups' reasoning. This would produce more robust scoring and mitigate the risk of the group closer to the product either missing issues or scoring the product too harshly.