Surveying Your Customers for NPS or Feedback? Consider these critical questions to ask a prospective vendor.
You’ve invited a supplier to demo what is supposed to be the latest and greatest Customer Feedback or NPS software… What questions should you ask? Here are 9 suggestions to drive a discussion with the provider that goes beyond feature-function to understand what they truly deliver:
- What are the measures of success you typically recommend?
Why it matters: What outcomes should you expect with this vendor? Surely you have expectations around the results… you’re not bugging your customers just to acquire data in search of a problem, right? We KNOW that surveys without action do more harm than good, so it’s best to align with your vendor on success measures from the start.
- Are they speaking about tactics like bounce or abandon rate, number of responses, or acquiring Net Promoter Score?
- And/Or, is the vendor talking about business outcomes such as retention rate, engagement rate, and customer relationships? How will you know if the Net Promoter Score is “statistically significant”? (Your leadership will ask!) What processes do you need in place to make the technology produce those results?
- Leave this conversation knowing where the ROI comes from (and be sure to account for Total Cost of Ownership, including human capital).
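When leadership asks whether an NPS result is “statistically significant,” a margin-of-error check is a reasonable starting point. The sketch below is a simplified illustration (not any vendor’s official formula, and the sample scores are invented): it treats NPS as the difference between the promoter and detractor proportions and computes an approximate 95% confidence band.

```python
import math

def nps_with_margin(scores, z=1.96):
    """Net Promoter Score plus an approximate 95% margin of error.

    NPS = %promoters (9-10) minus %detractors (0-6), in points.
    The margin uses the standard error of the per-response
    promoter/detractor indicator (a common approximation).
    """
    n = len(scores)
    p_prom = sum(1 for s in scores if s >= 9) / n
    p_det = sum(1 for s in scores if s <= 6) / n
    nps = (p_prom - p_det) * 100
    # Variance of the (+1 promoter / -1 detractor / 0 passive) indicator
    var = p_prom + p_det - (p_prom - p_det) ** 2
    moe = z * math.sqrt(var / n) * 100
    return nps, moe

# Hypothetical batch of 10 responses: 5 promoters, 2 detractors
scores = [10, 9, 9, 8, 7, 6, 10, 9, 3, 8]
nps, moe = nps_with_margin(scores)  # NPS = 30 points
```

With only 10 responses the margin of error comes out near ±48 points, which illustrates why a single small batch of surveys rarely supports a confident quarter-over-quarter claim.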
- How easy is it to integrate this software into my organization – what do I need to focus on to drive the results?
Why it matters: What am I really signing up for? How big is the lift, and do I have resources that can take this on?
- Are they talking about system integration, such as integration with your CRM or Customer Success platform?
- And/Or, are they talking about process integration and what must change inside your organization in order to obtain the results?
- Leave this conversation knowing if the vendor has experience with change management — new tools/systems always require something to change in order to acquire results… how well does your vendor understand this imperative?
- Tell me about your implementation experience: How long does implementation take and what are the key milestones?
Why it matters: How will the vendor help me to acquire the results?
- Are they talking about durations (“Up and running in 1 day!”)?
- And/Or, are they talking about process definition/improvement, internal education (beyond product training), oversight mechanisms? What is the level of interaction between you and the vendor during this critical phase?
- Leave this conversation knowing if the vendor/provider will be a resource for you: Do you really know how to design the questionnaire, the customer communications, and the follow-up approach and tactics? Do you have the expertise to design your feedback program essentially via self-service, or will your vendor/provider assist with best practices?
- What kind of support, education, and product training are included in the base price?
Why it matters: How will the vendor help me obtain my desired results? Are there “lessons learned” / best-practices that I should be considering, and how do I access that expertise?
- Are they talking about product training? Is there an additional cost?
- And/Or, are they talking about best-practices / lessons-learned to educate the required constituents on the “Why, What, and How” of customer feedback? Are there separate “success services” that the vendor provides, outside the SaaS subscription price, and how are those services available after go-live to course-correct over time?
- Leave this conversation knowing how the vendor will support you during the critical “adoption” phase necessary to acquire the results.
- How do I maximize adoption in my organization?
Why it matters: We’re all in the Customer Success business these days, including your vendors, and most are keen to focus on adoption (usage) of the technology. With this question you’ll want to understand how the vendor measures their own success. Is it about sending surveys, or about accelerating profitable growth?
- How does the vendor think about “adoption” when it comes to running surveys? Do they define “adoption” as usage/frequency of sending surveys? Are they looking to sell “survey builder” seat licenses?
- And/Or, are they thinking about what the survey results actually mean and getting internal audiences to take action on the results? Are you merely “measuring” (and annoying your customers in the process because there’s no sense of action), or are you seeking to drive improvement throughout the organization — how will your vendor help you drive “adoption of the insights” (i.e. acting on the feedback) that should be gained from executing a survey in the first place?
- Leave this conversation knowing the extent to which the vendor sees “adoption” as utilizing the survey results to drive internal process improvement, or if they are merely interested in having you (the customer) build surveys to your heart’s content. Your customers would certainly prefer the former, as more surveys aren’t usually the answer.
- How do we ensure the right surveys are sent to the right people at the right time?
Why it matters: People change jobs all the time. When your company closes a deal there are undoubtedly multiple stakeholders involved… how do you keep your CRM database up to date such that you are getting feedback from the right people? And how much manual effort is involved to send a survey to a given set of contacts?
- Does the vendor have a mechanism for triggering the survey to be sent to a designated contact? Thinking about NPS and the relationship lifecycle, how does the vendor enable you to collect feedback at the relevant points along the customer lifecycle, where there aren’t usually triggering events? What about “survey toxicity/fatigue” — which is commonly thought of as “over surveying” — how does the vendor ensure that any given customer isn’t bombarded by surveys?
- And/Or, does the vendor have a scheduling mechanism for collecting the right feedback along the customers’ lifecycle? Can the survey be scheduled by “anyone” (including CSMs or Ops) at an account or contact level to empower the team through active listening? Is there oversight to understand how well a given set of accounts (a territory owned by a CSM, for example) is listening and following up? Can the survey be tailored to ask only the relevant questions at the right time?
- Leave this conversation understanding the level of manual effort to execute an ongoing survey/feedback program. If you must manually build a list and upload contacts then the high level of manual effort will likely cause problems down the road. Consider ongoing feedback programs that are “always on” and continuously listening for bright-spots and hot-spots.
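The “survey toxicity/fatigue” concern above can be illustrated with a simple suppression rule: before a survey goes out, drop any contact who was surveyed within a cool-down window. Everything here is hypothetical (the 90-day threshold, the contact emails, the function name); it's a sketch of the idea, not how any particular platform works.

```python
from datetime import date, timedelta

# Hypothetical cool-down: skip contacts surveyed in the last 90 days,
# regardless of which survey reached them.
COOLDOWN = timedelta(days=90)

def eligible(contacts, last_surveyed, today):
    """Return the contacts who are outside the cool-down window."""
    out = []
    for c in contacts:
        last = last_surveyed.get(c)  # None means never surveyed
        if last is None or today - last >= COOLDOWN:
            out.append(c)
    return out

today = date(2024, 6, 1)
history = {"ana@acme.com": date(2024, 5, 20),   # 12 days ago: suppressed
           "raj@acme.com": date(2024, 1, 15)}   # 137 days ago: eligible
to_survey = eligible(["ana@acme.com", "raj@acme.com", "lee@acme.com"],
                     history, today)
# to_survey == ["raj@acme.com", "lee@acme.com"]
```

The point of the sketch: fatigue control only works if the suppression list spans every survey your organization sends, which is exactly the kind of cross-program oversight worth probing the vendor on.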
- How do CSMs action the data at the account level?
Why it matters: Most of us tend to think about surveys in the aggregate/cohort-level. Traditional approaches collect a bunch of feedback and then analyze the results for patterns that should be addressed. We’ll then emerge with our beautiful set of insights and expect the organization to stand up and act… This can take quite a long time, and meanwhile your customers are waiting.
- Does the vendor have a mechanism to aggregate feedback in a single account in order to help you course-correct? Are the results easily accessible at the account-level, or would a CSM need to filter/slice-and-dice survey results to get the bits they are looking for?
- And/Or, does the vendor recognize that B2B firms generally have people assigned to work with accounts (it’s their job), and connect that account-level feedback to your Success Plans?
- Leave this conversation knowing how the follow-up will happen. Results/ROI can be had quite rapidly at the account level: first by demonstrating that you are listening and care, and second by actually addressing the feedback. If you are waiting for grand initiatives to materialize based on the aggregated feedback, consider this quote from McKinsey & Co: “The academic research is really clear that when corporations launch transformations, roughly 70 percent fail.” If you’re waiting for a new feature or process improvement then you (and your customer!) could be waiting a long time. You might instead prefer to address what your customers are telling you by setting expectations and providing best practices or workarounds to improve their experience in the short term.
- How would a program manager access the data in your system to provide insights to management that would drive cross-functional actions across the company?
Why it matters: Companies suffer from “death by a thousand cuts.” There are so many “opportunities for improvement.” But just because customers complain about something or rate a particular area poorly doesn’t mean that you gain the biggest bang-for-the-buck when addressing it. Customers might complain about long wait times for support, but if they are confident that you’ll solve the problem on the first reply then they might be more patient.
- Does the vendor understand the statistics of key-driver analysis and how to apply it to your data set? Do you need a master’s in data analytics to get to insight, or does the platform surface patterns for you?
- And/Or, does the platform provide for statistical analysis to help you determine the optimal priorities? Since not all customers are equal (let’s face it… more money often means louder voice) how does the platform bring financial data into the picture? What about root-cause: Since customers can only provide symptoms (what they’ve experienced, compared to what they expect) it’s your job to understand where a gap occurred so you can remedy the correct upstream process… how does the platform help you accomplish this critical task?
- Leave this conversation knowing that you won’t have a massive list of unprioritized “opportunities.” Showing your budget holders the amount of revenue at risk, the upside potential, or efficiency gains in monetary terms is far more likely to get them paying attention. A short list of improvement initiatives, prioritized by the thing that matters to the CFO (money!) is the cost of entry.
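In its simplest form, the key-driver analysis mentioned above can be approximated by ranking each driver question by the strength of its correlation with the overall rating. The responses below are invented for illustration, and real platforms use more robust techniques (regression with controls, relative-importance methods), but the sketch shows the core idea: the loudest complaint is not always the strongest driver.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical responses: an overall 0-10 rating plus two driver ratings.
overall = [9, 8, 3, 10, 6, 2, 9, 7]
drivers = {
    "support_speed":   [7, 6, 6, 8, 7, 5, 7, 6],
    "product_quality": [9, 8, 2, 10, 5, 3, 9, 6],
}
# Rank drivers by how strongly they move with the overall score
ranked = sorted(drivers, key=lambda d: abs(pearson(drivers[d], overall)),
                reverse=True)
# ranked[0] == "product_quality"
```

In this made-up data, support speed varies little and correlates only moderately with the overall score, while product quality tracks it closely; a prioritized short list starts from rankings like this, then attaches revenue at risk to each item.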
- How does the platform trend data over time?
Why it matters: Trending results quarter-over-quarter generally means showing results from different customers at different points in time. Are you comparing apples-to-apples in that comparison — i.e., do you have a scientific sampling strategy that compares like-customers over time? Or, say a customer is “passive” in the Net Promoter language: if you knew that the customer used to be a detractor, you’d recognize that score as an improvement, which is a big deal.
- Does the vendor understand sampling methodology, or is the recommendation to combine apples and oranges into a single “orple” (which doesn’t make sense, right?) and trend those groups over time together?
- And/Or, since NPS programs are continuous and “always on,” what is the mechanism for trending data from the same accounts and contacts over time? Make sure your vendor understands your customers’ lifecycle and can show how sentiment changes over time, if only because the context evolves as the customer gets better educated on your solution. Hopefully the sentiment is positive and remains so, but as customers learn more about you there’s also the possibility of missing their expectations… do you have a mechanism for spotting those concerns before they become large financial hardships?
- Leave this conversation knowing how you will trend your data. Will you create orples? Or, will you be able to show the percentage of accounts and contacts that are improving or declining in their sentiment? If you knew an account was declining, wouldn’t that provide additional context for addressing their feedback?
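The apples-to-apples trending idea can be sketched by comparing the same accounts across two survey waves, rather than mixing different cohorts into an “orple.” The accounts, scores, and function name below are hypothetical; the sketch just shows the kind of per-account improving/declining view described above.

```python
# Hypothetical per-account scores from two survey waves.
wave1 = {"Acme": 9, "Globex": 6, "Initech": 4}
wave2 = {"Acme": 9, "Globex": 8, "Initech": 2}

def classify_trend(earlier, later):
    """Label each account present in both waves as improving,
    declining, or stable, judged against its OWN prior score."""
    labels = {}
    for account in earlier.keys() & later.keys():
        delta = later[account] - earlier[account]
        labels[account] = ("improving" if delta > 0
                           else "declining" if delta < 0
                           else "stable")
    return labels

trend = classify_trend(wave1, wave2)
# trend == {"Acme": "stable", "Globex": "improving", "Initech": "declining"}
```

From a view like this you can report the percentage of accounts improving or declining, and a declining account (Initech here) gets flagged with the context needed to act on its feedback.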
BEWARE: You may be tempted by free or near-free survey tools. But remember that you get what you pay for: If you want to add value to your company then NOTHING is free because:
- You’ll be putting effort into it.
- Your customers will be putting effort into it. Are you sure you want to “study” them, or do you want to “engage” them?
Side note: Nothing in this list should presume that “more capability” means “more expensive.” This stuff isn’t rocket science, so there’s no need to pay rocket-science prices. How well the vendor can scale their offering can influence price, and a more expensive tool isn’t necessarily better (or worse). Product pricing is usually owned by Marketing, and is often a reflection of the target market and what the vendor wants you to believe.