Blog 1 in a 3-Part Series on the Analysis of Bias-Filled Data
So you’ve designed the perfect customer feedback questionnaire, sent it out to your entire customer base, and the responses are flying in. You might be getting excited about analyzing the incoming data, but not so fast! In any kind of survey endeavor, and especially in customer experience feedback, the analyst must be conscious of the bias present in the data collected. Before discussing techniques to identify and then correct for bias in the data (the second and third parts of this blog series, respectively), I’ll outline the different types of bias that are present in our field.
Two types of bias are present in every data-based experiment: random bias and systematic bias. Random bias is always present when measuring customer experience or any other behavioral process; people respond differently based on unpredictable factors such as life events, mood or even the weather! Because random errors fluctuate in both directions within the sample, they tend to cancel out on average and should not slant your data in any one direction.
Systematic bias, on the other hand, skews survey results in a particular direction away from true population values. For example, if you only sent questionnaires to clients from a specific region or ethnicity, you can be sure that their answers would differ systematically from those of your customer base as a whole. Bias introduced by how a sample is constructed is called sampling bias.
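The difference between the two can be seen in a small simulation. The sketch below uses made-up satisfaction scores on a hypothetical 1–10 scale: random noise added to every response barely moves the sample average, while a systematic shift (such as sampling only one unusually satisfied region) moves the average no matter how many responses come in.

```python
import random
import statistics

random.seed(42)

TRUE_MEAN = 7.0   # hypothetical "true" average satisfaction (1-10 scale)
N = 10_000        # number of simulated responses

# Random error: unpredictable per-person fluctuation around the true score
# (mood, life events, even the weather).
noise = [random.gauss(0.0, 1.5) for _ in range(N)]
random_sample = [TRUE_MEAN + e for e in noise]

# Systematic bias: the same shift applied to everyone, e.g. a sample
# drawn only from one unusually satisfied region.
SHIFT = 0.8
biased_sample = [TRUE_MEAN + e + SHIFT for e in noise]

print(statistics.mean(random_sample))  # hovers near 7.0: noise averages out
print(statistics.mean(biased_sample))  # hovers near 7.8: the bias persists
```

Note that collecting more responses shrinks the effect of the random noise but does nothing to the systematic shift, which is why sample size alone cannot fix sampling bias.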
Of course, few market researchers will intentionally omit certain groups from their survey invitations. But because we are rarely able to use probability-based sampling and instead collect all survey responses that come in (often called convenience sampling), the sample that emerges is far from representative of the overall population. This invites another form of sampling bias: self-selection bias, which occurs because the people who elect to respond differ meaningfully from those who do not. Respondents tend to have more favorable opinions of the company than non-respondents, and there is still debate about whether certain cultures or ethnicities are more likely to participate in surveys. Regardless, we must understand that when we analyze customer survey data, we are studying the most engaged group of customers and that, unless we use techniques to adjust for this bias, we may only generalize our findings to this smaller group.
Waypoint’s focus on non-respondents is what differentiates our methodology from the rest of market research. Rather than ignoring this group and analyzing only respondents’ data, we know that much insight can be found by identifying which customer traits are most significant in predicting survey response. These factors delineate the group that the company most needs to energize and engage; the next step is to follow up with typical non-respondents to learn what went wrong in their experiences.
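As a minimal sketch of that idea, the snippet below compares response rates across values of a customer trait. The records and field names (`segment`, `responded`) are invented for illustration; in practice you would fit a proper model such as a logistic regression on real CRM data, but even a simple rate comparison flags traits where respondents and non-respondents diverge.

```python
from collections import defaultdict

# Hypothetical CRM records: each customer has traits plus a flag for
# whether they answered the survey. Field names are illustrative.
customers = [
    {"segment": "enterprise", "tenure_years": 5, "responded": True},
    {"segment": "enterprise", "tenure_years": 2, "responded": True},
    {"segment": "smb",        "tenure_years": 1, "responded": False},
    {"segment": "smb",        "tenure_years": 4, "responded": True},
    {"segment": "smb",        "tenure_years": 1, "responded": False},
    {"segment": "enterprise", "tenure_years": 3, "responded": False},
]

def response_rate_by(trait, records):
    """Response rate for each value of a trait; a large gap between
    values flags that trait as a driver of self-selection bias."""
    counts = defaultdict(lambda: [0, 0])  # value -> [responses, total]
    for r in records:
        counts[r[trait]][0] += r["responded"]
        counts[r[trait]][1] += 1
    return {v: resp / total for v, (resp, total) in counts.items()}

print(response_rate_by("segment", customers))
```

A trait with sharply different response rates across its values tells you two things at once: it should inform any bias adjustment, and it points at the customer group most worth a follow-up.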
What do you think? Are you aware of different biases in your customer experience data and how do you react to them?