The Importance of Collecting NPS and Gathering Feedback for the Product Team
Written by Mark Pecoraro
Product teams have many different ways to get input on product direction: strategic direction set by the Board/CEO, sales requirements (that next BIG deal), market strategy, and the list goes on. They also get an earful from the technical support team, which generally operates in a reactive break/fix posture. However, as a Customer Success professional, I have learned that I have an opportunity to take the voice of the customer and be the conduit back to the product team, driving and prioritizing where the product's focus needs to be from a customer perspective.
Having a “good” product is critical for customers to be successful and for keeping Customer Success Managers from becoming a mislabeled support team. What do I mean by a “good” product? (We all think our product is awesome, right?) A “good” product is one that delivers what sales promised, deploys easily and quickly, is high quality, has an intuitive UI, plays nicely in your ecosystem of data and applications, and ultimately provides the end customer with the valuable outcome and user experience for which they originally purchased the product. For products that fall short in these areas, no reasonable number of CSMs can be a band-aid that will scale with your company. I once had a CEO tell me about his 30-person CS team, of which 20 were really tech support folks “masking product deficiencies.” Having seen that movie before, I knew his CS team had a long, painful road ahead.
So, how can you take your voice-of-the-customer efforts and use that hard data, straight from the mouths of paying customers, to drive product changes?
Most of us are familiar with Net Promoter Score (NPS) and the buckets it produces: promoters (score 9–10), passives (score 7–8), and detractors (score 0–6). The score itself is the percentage of promoters minus the percentage of detractors, so it ranges from -100 to +100. An NPS survey usually includes a comment field asking why you gave the score you did. These comments, both good (promoters) and bad (detractors), can prove to be extremely vital feedback for your product team.
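The bucketing and scoring above can be sketched in a few lines of Python. This is a minimal illustration of the standard NPS definition (promoters 9–10, detractors 0–6, score = % promoters − % detractors); the function name is my own, not from any survey platform.

```python
def nps(scores):
    """Return the Net Promoter Score (-100..+100) for a list of 0-10 ratings."""
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)   # 9-10
    detractors = sum(1 for s in scores if s <= 6)  # 0-6; 7-8 are passives
    return round(100 * (promoters - detractors) / len(scores))

# 3 promoters, 2 passives, 3 detractors out of 8 -> score of 0
print(nps([10, 9, 8, 7, 6, 3, 10, 0]))  # 0
```

Note that passives count toward the denominator but neither add to nor subtract from the score, which is why a sea of 7s and 8s yields an NPS of exactly zero.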
Recently, I completed a CS automation implementation for a SaaS eCommerce company. They had no voice-of-the-customer programs, which gave us an opportunity to start with the basics. We implemented some transactional surveys, starting with how their onboarding experience went and how we did in technical support. They had also never run an NPS program, so using the new CS automation platform, we started sending NPS surveys to their customer base of over 3,000.
In this specific VoC example, the NPS effort was very effective at surfacing product-related issues, and we suspected this would be the case from the onset of the program. The reason was that our NPS survey was largely targeted at hands-on users of the application, which meant, in this case, we would receive feedback from thousands of users across the enterprise customer base. This effort was distinctly different from the transactional surveys we executed at the end of the onboarding process and after each technical support case was closed. Those surveys targeted feedback on very specific interactions we had with the customer. The transactional surveys had multiple questions (four) and specifically targeted feedback on our performance so we could find areas to improve in our processes. The NPS effort, by contrast, was an ongoing program intended to monitor the progress of our relationship with the customer and the systemic root causes behind that feedback. Once a user responded to an NPS survey, we didn't survey that user again for a minimum of 120 days. Over time, we intended to keep users giving us feedback so we could track not only the NPS score's progress but, just as importantly, our progress on the underlying reasons for the ratings, with a closed-loop process inside the company to take action on the primary drivers surfaced by the NPS VoC program.
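The 120-day cooldown rule described above amounts to a simple eligibility check before each survey send. A minimal sketch, assuming the platform can tell you each user's last response date (the function name and arguments are hypothetical, not from any particular CS automation tool):

```python
from datetime import date, timedelta

COOLDOWN = timedelta(days=120)  # minimum gap between NPS surveys per user

def eligible_for_nps(last_response, today=None):
    """True if the user never responded, or responded at least 120 days ago."""
    today = today or date.today()
    return last_response is None or today - last_response >= COOLDOWN
```

In practice, this filter runs against the full user list on each survey cycle, so the same users are re-surveyed a few times per year and their score trajectory can be tracked.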
After three months of data collection, the NPS score for the overall customer base left something to be desired. The real question that needed to be addressed was, "What is the root cause driving the NPS number?" We had tons of comments associated with the ratings customers had given us, but we needed to figure out how to distill them into something actionable. We created a taxonomy that allowed us to categorize the comments into some basic buckets. Having been in the software industry for a long time, I've come to realize there's a very limited number of buckets these comments fit into, even though it seems overwhelming when you're just looking at the raw data. We used a pretty simple but effective taxonomy. These tags can be associated with any score from 0 to 10, and any comment can carry as many tags as it takes to capture the essence of what the customer said. If the rating was low, you can bet the tag had a negative connotation; if the comment was associated with a high rating, it probably had a positive one.
Sample NPS Tagging Taxonomy
- Prod – UI
- Prod – Features
- Prod – UX
- Prod – Performance
These tags are fairly generic to most SaaS companies. Your company might add, subtract, or modify this list for your business, but for the most part, it encapsulates the main buckets I've seen at most SaaS/software organizations. As responses came back from the NPS surveys, we had a process to look at all the comments, regardless of score, and each comment would get one or more tags that were appropriate for it. For example, a promoter who loves support and a detractor who hated support would both get the tag "customer support." With the comments now normalized with these tags, you could take a grouping such as the detractors (the ones who rated 0–6) and see which tags were most prevalent in their comments. The same applied to folks who scored us a nine or ten. We would then rank-order how many times each tag was used in those feedback buckets. With this data now neatly organized and presentable, we were in a position to start pushing it back into the organization. The CEO was absolutely thrilled to have data he had never had before: a 360° view of the customer, their health, and some simple voice-of-the-customer programs that, unfortunately, showed there was a lot of work to be done. Customer Success leaders were now not just handing over raw data but presenting the customers' priorities in the customers' own voice. When we presented this to the product leadership, they saw it as an incredibly positive opportunity to take action where we were having the biggest challenges in the business.
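The group-and-rank step above is easy to mechanize once comments are tagged. A minimal sketch in Python, using invented sample data (the tags and scores below are hypothetical; in the real program, tagging was done by a human reviewing each comment):

```python
from collections import Counter

# Hypothetical tagged responses after manual review: (score, [tags]) pairs.
responses = [
    (2,  ["Prod - UI", "Prod - Performance"]),
    (5,  ["Prod - Performance"]),
    (9,  ["Prod - Features"]),
    (10, ["Prod - Features", "Prod - UX"]),
    (6,  ["Prod - UI"]),
]

def top_tags(responses, lo, hi):
    """Rank tags by frequency among responses whose score falls in [lo, hi]."""
    counts = Counter(
        tag
        for score, tags in responses
        if lo <= score <= hi
        for tag in tags
    )
    return counts.most_common()

print(top_tags(responses, 0, 6))    # detractor tag ranking
print(top_tags(responses, 9, 10))   # promoter tag ranking
```

Running the same ranking over detractors (0–6) and promoters (9–10) separately is what turns a pile of free-text comments into the "most prevalent pain points" list you can hand to product leadership.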
The data doesn’t lie.
The data became an additional, informed vector for product to influence decisions about priorities and resource allocation. There were clear themes in the data showing that certain product issues were having a negative impact on customers. By being transparent with this data, we were able to help influence and prioritize feature requests, invest more time and energy in quality, and focus on other areas of concern that customers had expressed in their own voices. Product leadership was so interested in this data that they asked the customer success team to start setting up feedback sessions, so that engineering could dive deeper with the customers who had further valuable input. In my experience, the importance of gathering VoC extends beyond retention and upsell opportunities. Collecting feedback, and proving to your customers that you will act on it in areas such as product, can have an incredible effect on the next iterations of your product, in turn improving your company overall.
To learn more about our guest blogger Mark Pecoraro: