AJ Griffies

NPS Detractor Interviews

Background Information

In November of 2019, I joined LexisNexis as the first full-time researcher on a search engine product serving legal researchers and librarians at large law firms.

The product was going through a difficult migration process. To gauge what the Net Promoter Score (NPS) rating would be on the new version of the product, we surveyed all migrated users, which gave us a preview of the official NPS rating we could expect once more users were migrated.

The result was a low NPS score. Some of the low ratings were explained by usability issues we already knew about, and I coded the verbatim responses to get a better sense of the new product's current issues. However, many of the write-in responses were vague, overly general, or did not make clear which part of the product the user was criticizing.


I wanted to uncover the range and variety of usability issues that affected users' workflows in the new version of the product, and how those issues impacted NPS.


I conducted semi-structured interviews with twenty users who had given detractor ratings on the NPS survey. Each interview was individually tailored to the participant based on their survey responses.

My Role in this Project

I planned, recruited for, moderated, analyzed, and reported the results of these interviews.

The product stakeholders sat in on many sessions to observe and hear the feedback from users firsthand.


I synthesized the sessions and reported the results to the product team (UX designers, visual designers, project managers, engineers, and the product owner). Several themes recurred across participants, which allowed us to better understand the variety of issues users had with the new product.

Additionally, the interviews brought to light that different types of customers use the product in entirely different ways and run into completely different usability issues, driven in part by differences in their knowledge of our product and of the legal system in general. Prior to this study, the product team had focused predominantly on one type of user, and this study drove home the importance of optimizing the product for all kinds of users.


If I could do this project over again, I would cross-reference the issues that arose during the interviews with behavioral data from our data analysts. For example: if a number of the users we interviewed had trouble using the filters when searching, could we use the analytics data to detect the overall prevalence of that usability issue?
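As a sketch of what that cross-reference might look like: assuming the analytics team could export per-session event logs, one could estimate how widespread filter-related friction is across all users, not just interviewees. Everything below is hypothetical for illustration — the event names, the log shape, and the "applied then immediately cleared a filter" proxy are invented, not taken from the actual product's instrumentation.

```python
# Hypothetical event log: one (session_id, event_name) pair per row.
# Event names are invented; real instrumentation would differ.
events = [
    ("s1", "search_run"), ("s1", "filter_applied"), ("s1", "filter_cleared_immediately"),
    ("s2", "search_run"),
    ("s3", "search_run"), ("s3", "filter_applied"),
    ("s4", "search_run"), ("s4", "filter_applied"), ("s4", "filter_cleared_immediately"),
]

def filter_friction_rate(events):
    """Share of filter-using sessions that applied a filter and then
    immediately cleared it -- a rough proxy for 'trouble using the
    filters' reported in the interviews."""
    applied = {s for s, e in events if e == "filter_applied"}
    cleared = {s for s, e in events if e == "filter_cleared_immediately"}
    if not applied:
        return 0.0
    return len(applied & cleared) / len(applied)

print(f"{filter_friction_rate(events):.0%}")  # 2 of the 3 filter sessions
```

A simple rate like this would not prove the interview finding, but it would show whether the friction interviewees described is rare or widespread, which helps prioritize the fix.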