In the customer feedback world we’re often tempted to use average scores to present our feedback (survey) results. I constantly hear things like, “Our average score has moved from 8.25 to 8.31 — we are improving!” Or, “Our average score for tech support is 6.72 but for product usability it’s 5.89, so clearly ease-of-use is something we need to focus on in order to drive customer retention.”
There are significant problems with these statements, many of which are well documented in the book The Flaw of Averages (Wiley, 2009) and also discussed briefly on this blog at http://waypointgroup.org/averages-are-just-so-so/. The issue also comes up in the article “These tech worker wages will astound you,” which discusses how a single outlier, Mark Zuckerberg’s wealth, can distort an average.
So nothing new here, really, but still worth calling out for our B2B audience, as I’m frequently asked, “What should I do about feedback from multiple people in one account? Do I just average it?” The answer, of course, is a resounding “NO!” Say you have three people in one customer account who gave you feedback: one happy, one sad, and the third “neutral” (or “passive” for our Net Promoter NPS fans). Just as the average American family has 2.2 children (I’ve never met a 20% person, although I’ve been accused of acting that way from time to time), you can’t say that, on average, the account is “neutral.” But here’s what you can say:
- There’s one person in this account who loves us. What tactics can we use to leverage and benefit from that goodwill?
- There are two people in the account who don’t love us. What is causing this, and how can we “convert” them into fans (“promoters”)?
(Sidebar… you should also know whether there are people who didn’t respond to your request for feedback — do you really want to just throw them out of your analysis? If they were important enough to be invited to provide feedback, then the fact that they aren’t responding is also a data point that must be considered. We’ve written about this often, and also created a short video analysis on the consequences of ignoring this group of people.)
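To make the point concrete, here’s a minimal sketch of that three-person account. The names and scores are invented, and the 0–10 scale with the standard NPS cut-offs (9–10 promoter, 7–8 passive, 0–6 detractor) is assumed; the takeaway is that the single averaged number says nothing about the mix of sentiment behind it.

```python
# Three contacts in one hypothetical account (scores are made up).
scores = {"happy": 9, "neutral": 7, "sad": 3}

# The average collapses three very different people into one bland number.
average = sum(scores.values()) / len(scores)
print(f"Average score: {average:.2f}")

def category(score):
    """Standard NPS buckets: 9-10 promoter, 7-8 passive, 0-6 detractor."""
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"

# The distribution is what you can actually act on.
mix = {name: category(s) for name, s in scores.items()}
print(mix)  # one promoter to leverage, one passive and one detractor to convert
```

The average (about 6.33 here) would label the whole account a detractor, yet there is a promoter inside it whose goodwill you could be putting to work.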
Just as NPS can provide a metric for understanding overall company-to-customer relationships, calculating a Net Promoter Score for each account might help you prioritize certain treatment strategies, or help you see which accounts are stronger than others. From there you can see what percentage of the business (customer accounts and revenue) is above a certain threshold, and what percentage falls below. But relying on one metric — tempting as it might be — will only provide you with one dimension. And since a person isn’t one-dimensional, be sure to look at other attributes such as the role of each contact in purchase decisions, your company’s account management model, the product use cases in play, etc.
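A per-account NPS and the “percentage of revenue above a threshold” view can be sketched in a few lines. The account names, revenues, scores, and the threshold of zero are all illustrative assumptions, not real data:

```python
# Hypothetical accounts, each with revenue and individual 0-10 feedback scores.
accounts = {
    "Acme":    {"revenue": 500_000, "scores": [9, 10, 7]},
    "Globex":  {"revenue": 250_000, "scores": [6, 8]},
    "Initech": {"revenue": 100_000, "scores": [10, 9, 9, 4]},
}

def nps(scores):
    """Standard NPS: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

threshold = 0  # illustrative cut-off between "stronger" and "weaker" accounts
strong = {name for name, a in accounts.items() if nps(a["scores"]) > threshold}

total_rev = sum(a["revenue"] for a in accounts.values())
strong_rev = sum(accounts[n]["revenue"] for n in strong)
print(f"{len(strong)}/{len(accounts)} accounts and "
      f"{100 * strong_rev / total_rev:.0f}% of revenue sit above NPS {threshold}")
```

Even this tiny example shows why one number isn’t enough: an account can clear the threshold while still containing a detractor who needs attention.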
So what are we left to do? If you want something meaningful that produces value-add, then you’ll have to bite the bullet and do some work. Fortunately (warning, shameless plug coming), there’s another answer in the form of TopBox*, a product we recently launched that, as a core design principle, helps you manage this complexity out of the box. TopBox automates the drudgery of calculating metrics by segment, by account, or by combination of characteristics, so you can focus on genuine value-add — driving action that leads to improvement.
On the other hand, if you want to publish results quickly and move on, using averages might just be fine. Just know that the value gained from your work generally corresponds to the effort you put into it, and that the value-added contribution for such work is likely to be just average.