Net Promoter & Statistics: When Accuracy Goes Haywire, and 5 Ways to Proceed

Posted on October 12, 2011, by Steve Bernstein

As a practitioner in the field of Customer Insights / Customer Experience / Net Promoter / Voice-of-the-Customer (what are we supposed to call this field, anyway?!?), I am frequently asked, “How many responses do we need to be statistically significant?”
Statisticians often use a “margin of error” calculation. Depending on your population size, this often suggests ~300 responses per analysis segment. But we can answer the question of “how many do we need” in different ways, with pros and cons for each. Here are my findings, based on my 22 years of real-world experience in this area (and this is certainly a larger topic that I think would be better served as a series of discussions!):
Pros: Confidence intervals are generally familiar and accepted by anyone who sees market research data in the media. People seem to appreciate the idea that “we can be 95% certain that the score is X% +/- Y%.” You can report it and move on.
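For readers curious where rules of thumb like “~300 responses per segment” come from, here is a minimal sketch of the standard sample-size formula for a proportion, with the usual finite-population correction. The function name and numbers are mine, purely for illustration; real programs should check the assumptions (simple random sampling, in particular) before leaning on it.

```python
import math

def sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Responses needed for a +/- `margin` interval at the confidence
    level implied by `z` (1.96 ~= 95%), assuming a simple random sample
    and the worst-case proportion p = 0.5."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population requirement
    n = n0 / (1 + (n0 - 1) / population)        # finite-population correction
    return math.ceil(n)

# A segment of 1,000 customers, 95% confidence, +/- 5%:
print(sample_size(1_000))       # -> 278
# For very large segments the requirement levels off near ~385:
print(sample_size(1_000_000))   # -> 385
```

For segments of one to two thousand customers the formula lands in the high-200s to low-300s, which is roughly where the “~300 responses” figure comes from.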
Cons: Confidence measures assume that your respondents are a random, representative sample of the population. Much like in the world of Economics, where textbooks start off, “Assuming a rational world…,” we know from experience that most customer feedback programs are not based on random samples that represent the total population. Why?

  1. People are people, not instruments. We have emotions and biases that can’t always be known.

    CONSIDER: If you wanted to build a bullet-proof airplane, would you want to examine the airplanes that successfully returned from their bombing runs, or would you rather examine the planes and pilots that didn’t make it back in order to improve? Don’t you need to make sure you get feedback from disengaged customers?

  2. Who is responding? That is, who is “opting in” to provide feedback? In our experience, scores generally skew positively. That is, happy customers respond more than unhappy customers, who are otherwise likely to be “checked out” or see no reason to participate.
  3. Whom are you inviting to provide feedback? Many programs suffer from bias and unintentionally select “happy” customers. Face it – where you have good customer contact data, you will tend to also have stronger customer relationships. And, especially if you compensate your employees based on customer feedback scores, the program is certainly going to seek out happy customers to provide feedback. Just use your car-dealer experience as a blatant example.
  4. What is the right confidence level, anyway? We often see statements like, “At 95% confidence…” That yardstick is generally accepted in the research world, where we might be making life-or-death decisions. But in business, would you rather base your decision on some evidence or just a hunch? Wouldn’t 50% confidence be better than 0%?
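To make the confidence-level question concrete, here is an illustrative sketch (my own, not from any particular feedback program) of how the reported margin of error narrows as you relax the confidence level, holding the response count fixed at 300:

```python
from math import sqrt
from statistics import NormalDist

def margin_of_error(n, confidence, p=0.5):
    """Half-width of the two-sided confidence interval for a proportion,
    assuming a simple random sample of n responses."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # two-sided critical value
    return z * sqrt(p * (1 - p) / n)

for conf in (0.50, 0.80, 0.95, 0.99):
    print(f"{conf:.0%} confidence: +/- {margin_of_error(300, conf):.1%}")
```

With 300 responses this works out to roughly +/- 1.9% at 50% confidence, +/- 3.7% at 80%, +/- 5.7% at 95%, and +/- 7.4% at 99% – the same data supports a much tighter-sounding claim if you are willing to be wrong more often, which is exactly why the choice of confidence level deserves scrutiny.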

5 recommendations:

  1. Pay attention to your sampling strategy – whom are you inviting to provide feedback? – and also examine who responded. Make sure both groups represent your business in ALL segments that you intend to act upon. Are you seeking out and acquiring feedback from those who matter most? (And how do you know…? We’ll have to address that in a separate post…)
  2. Recognize that some customers simply are more important to your business than others. Especially in business-to-business (B2B) situations with complex buying cycles, make sure you are talking to the people that matter most.
  3. Pay attention to everyone. While this might seem contradictory to item #2 immediately above, no business wants negative word-of-mouth that destroys growth and profitability. A sample size of 1 can be telling, especially if you leverage that 1 person to understand the root cause (that looks like yet another potential topic for a future post…).
  4. Leverage your strengths. We often tend to focus on the negative. Now that you’ve identified your promoters, engage them! Whom do they know? What are the cross-sell opportunities? What can those customers tell you about your competition?
  5. Context is everything; scores can be meaningless on their own. Whatever you use – Net Promoter, customer effort, customer satisfaction, etc. – you will always need relevant metrics for comparison in order to understand what actions to take. Example: if you step on the scale this week and weigh 170 lbs (~77 kg), and the week before you weighed 168 lbs (~76 kg), is that a good thing or a bad thing? To answer that I’d need to know more – percent body fat / BMI, goals (“thinness”? muscle?), and how you compare to your “peers” (defined by your goals). Scores don’t say much on their own. Similarly, in the customer-feedback world you need to understand your sample and make sure you are comparing apples to apples.

As one of my mentors always says, there are a lot of edges to this work. One short blog post isn’t going to close this out. The bottom line for me: if your primary goal is to present data, then use confidence measures. On the other hand, if you want to drive profitable growth, then consider doing more. After all, between this word-of-mouth age of the Internet and the need to keep our existing customers coming back for more, don’t you ideally want 100% of your customers to be with you (and not against you)?