
Best Practices in Customer Feedback: Does your Survey Add Value?

Posted on January 9, 2014 , by Steve Bernstein
CATEGORIES: Lessons Learned

Most companies survey their customers. Whether it's a periodic “relationship” or Net Promoter-type survey, an ongoing “transactional” survey that requests feedback following a customer interaction, or even a market research study, companies seem to love surveys!

Take a step back and evaluate if your customer feedback process is effective.

Let’s face it – surveying your customers “just because you want a score” isn’t exactly a sound customer strategy.  And besides, if your customers are giving you the gift of their time and insight, shouldn’t you do something with it?  Customers are far more valuable and deserve to be treated better.  (We also have real data showing how surveying customers without taking action actually does more harm than good. More on this in tomorrow’s post).
Here are 5 areas to assess your company’s survey process (Net Promoter or otherwise), oriented toward B2B companies:

Think DIALOG and ACTION, not “survey.” If there’s no plan to act on the findings, why survey at all? And if there are no findings, then you need to fix that before you waste any more of your customers’ time.

1. What is the purpose of the survey – are there clearly stated objectives?  For example, is it used only to present a Net Promoter Score (NPS) or satisfaction score to leadership, or are there actions expected to come out of it?  Our clients generally leverage a survey process in two ways – “1:1” for an account team to drive growth in individual accounts (by activating promoters, as one method), and “1:Many” to assist with prioritizing the right improvement initiatives (e.g., product, support, consulting, etc.).  The survey objectives generally fall into these two areas.  Are there specific outcomes expected of the survey?
2. Many companies like to tie survey scores to MBOs and employee metrics.  Therefore the survey results need to accurately represent the portion of the business being measured (we often use the word “trustworthy,” or “representativeness,” to describe this – with a deeper discussion on the topic in a different blog post).  Does your company have a method in place for determining how “trustworthy” the scores are?  That is, does the feedback truly represent the segment (e.g., region, product, account manager/team) being assessed?
3. Many companies like to trend survey scores over time, and often see scores that are flat or even generally trending upward.  Separate from any margin-of-error calculations (discussed in this earlier post):

  • If the scores are improving, what do you attribute those improvements to (i.e. why are the scores improving)?  Is knowing this important to your company (for example, so you can replicate these “bright spots” to other parts of the business)?
  • If the scores are flat, why is this acceptable?  Doesn’t the company want to see some ROI (results) on the investment in time, energy, and resources?  Don’t you want to have “career building” measurable results for the effort?

4. Related to #3: Are you confident that scores are *really* increasing (or flat), or could sampling bias (cherry-picking contacts) or non-response bias (for example, only newer contacts respond) be influencing scores?  This is especially prevalent in B2B firms, where account teams can have a large impact on determining who gets a survey and/or how the survey process is communicated to the account.  We’ve written about this with research results.  Is understanding any bias in your data collection process important to the company?
5. Is there a process in place for distributing targeted customer feedback to the account teams (and others) and for following up on the feedback?  If not, should there be – wouldn’t an account team benefit from understanding the sentiment of the customers they serve?  And, in the event of negative feedback, would an account manager know how to handle it (and how do you know)?  Similarly, in the event of positive feedback, would the account team know how to handle it?
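To make the margin-of-error point in #3 concrete, here is a minimal sketch of one common way to compute a confidence interval around an NPS result. It assumes each respondent is scored +1 (promoter), 0 (passive), or -1 (detractor), and uses a normal approximation; your own program may use a different method, and the numbers below are purely illustrative.

```python
import math

def nps_margin_of_error(p_promoters: float, p_detractors: float,
                        n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for an NPS estimate.

    Each respondent is treated as +1 (promoter), 0 (passive), or
    -1 (detractor), so the per-respondent variance is
    (p_promoters + p_detractors) - (p_promoters - p_detractors)**2.
    """
    nps = p_promoters - p_detractors
    variance = (p_promoters + p_detractors) - nps ** 2
    return z * math.sqrt(variance / n)

# Illustrative example: 50% promoters, 20% detractors, 200 respondents.
# NPS = +30, but the 95% margin of error is roughly +/- 11 points --
# a quarter-over-quarter move of a few points may be pure noise.
moe = nps_margin_of_error(0.50, 0.20, 200)
print(f"NPS 30, margin of error: +/- {moe * 100:.1f} points")
```

With only a couple hundred responses, swings of several points fall well within the noise band, which is why attributing a flat or rising trend to a real cause (as #3 asks) matters more than the raw number.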
Survey programs that fail to enable action should be abandoned.  They are a waste of time and resources, and they erode customer trust.  They mis-set expectations and allow your competitors to gain a foothold in your accounts.
Feedback is a strategic asset – after all, who best knows how to prioritize problems your customers face: does your company really know customer concerns better than customers do?  And once you mine the feedback to understand the issues they face and how your firm’s products and services help (or don’t help), bringing the right solutions to your customers (whether by leveraging existing “bright spots” within your firm, or by developing the right fixes) is a sure-fire way to accelerate your firm’s rate of profitable growth.  Wouldn’t that be a great career-building result?

Does your program add value?
