
Adjusting for bias in customer survey data: a case example

Posted on March 1, 2012, by maxswaypoint
CATEGORIES: Loyalty Research

Part 3 of a 3-Part Series on Analyzing Bias-Filled Data
Visiting a city for three days does not give one enough information to make claims about its country's weather. It is just as risky to draw conclusions from customer experience feedback without treating the bias that may lie within. In the first post of this series, I discussed different types of bias, particularly the importance of self-selection bias in customer experience data. In the second post, I offered tips to pre-treat your survey to increase response propensity and identify underlying bias. Today, I will share techniques to adjust your data for this bias in order to minimize its effect on your survey results.
Most of the adjustment techniques common in customer experience surveys center on pinpointing which groups are under-represented in the data and assigning weights to those groups to adjust for their lack of response. The weight is the ratio of the representative count of a subgroup (taken from a census or a known population parameter) to its actual count in the sample. Say you have 100 respondents in a customer survey, but 75 of them are women and 25 are men, while the population you want to generalize to is split evenly. To use this data to describe the larger population (or make predictions about future customers), you could multiply the data for men and for women by the following weights:
Weight (men) = Representative Count (men) / Actual Count (men) = 50 / 25 = 2
Weight (women) = Representative Count (women) / Actual Count (women) = 50 / 75 ≈ 0.67
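To make the arithmetic concrete, here is a minimal sketch in Python with pandas. The satisfaction scores are made up purely for illustration, and the population is assumed to be split 50/50:

```python
import pandas as pd

# Hypothetical survey: 100 respondents, 75 women and 25 men; the
# satisfaction scores are invented for illustration only.
df = pd.DataFrame({
    "gender": ["F"] * 75 + ["M"] * 25,
    "satisfaction": [8] * 75 + [6] * 25,
})

# Known population shares, assumed here to be an even split.
population_share = {"F": 0.50, "M": 0.50}
sample_share = df["gender"].value_counts(normalize=True)

# Weight = representative share / observed share
# (2.0 for men, ~0.67 for women, matching the formulas above).
df["weight"] = df["gender"].map(lambda g: population_share[g] / sample_share[g])

unweighted = df["satisfaction"].mean()
weighted = (df["satisfaction"] * df["weight"]).sum() / df["weight"].sum()
print(f"unweighted mean: {unweighted:.2f}, weighted mean: {weighted:.2f}")
```

With these made-up numbers, the unweighted mean (7.50) is pulled toward the over-represented group, while the weighted mean (7.00) restores each group's intended influence.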

Figure: Customer experience attributes differ between response propensity (RP) groups.

While this is a common method (and certainly useful), it has a number of limitations, chief among them the inability to weight for multiple variables simultaneously. Logistic regression is useful here because it can evaluate the relative importance of a large number of independent variables to survey response (the dependent variable). Several techniques use logistic regression to correct a predictive model for self-selection bias, including sample selection modeling and Heckman correction modeling. The idea in both approaches is to build two models: one predicting survey response (the response model) and one predicting some key outcome (the outcome model). The response model's regression coefficients are then used to correct the outcome model for selection bias. These tools have been established and validated in both academic journals and industry practice.
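As a rough sketch of the response-model half of this idea, the snippet below fits a logistic regression on frame variables known for respondents and non-respondents alike, then turns the predicted response probabilities into simple inverse-propensity weights. This is a simpler application than a full Heckman correction (which feeds the response model into the outcome model via an inverse Mills ratio term), and the file and column names are hypothetical:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# One row per contact in the frame; `responded` is 1 if the contact
# completed the survey, 0 otherwise. File and column names are assumed.
contacts = pd.read_csv("contact_frame.csv")

# Frame variables available for respondents AND non-respondents.
X = pd.get_dummies(contacts[["tenure_years", "segment", "region"]],
                   drop_first=True)
y = contacts["responded"]

# The response model: predicts the probability that a contact responds.
response_model = LogisticRegression(max_iter=1000).fit(X, y)
contacts["rp_score"] = response_model.predict_proba(X)[:, 1]

# One simple use of the scores: inverse-propensity weights, so that
# respondents who were unlikely to respond count for more.
respondents = contacts[contacts["responded"] == 1].copy()
respondents["weight"] = 1.0 / respondents["rp_score"]
```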
We took this approach for a spin with a dataset from a recent client and found some interesting trends. A response propensity (RP) score was calculated for each contact (respondent and non-respondent) in our contact base, using the logistic regression coefficients from the response model. We then created three segments of contacts: those below, at, and above the median RP. The survey data from respondents in each segment were analyzed for differences, and while our results are still preliminary, we see definite distinctions on certain questions. The plot above shows that the High-RP group (the contacts statistically defined as likeliest to respond) actually gives a lower rating for Ease of Doing Business than the Low-RP group (the contacts defined as least likely to respond). Without an adjustment like those described above, our overall Ease rating would be pulled downwards because the Low-RP group is so under-represented. Your mileage may vary, of course; the simplest way to avoid this problem is to raise the response rate in the least-likely group.
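Continuing the previous sketch, the segmentation step might look like the following. It splits respondents at the median RP score of the whole contact base (a two-way version of the below/at/above-median segments described above), and the Ease of Doing Business column name is an assumption for illustration:

```python
import numpy as np

# Median RP score across all contacts, respondents and non-respondents.
median_rp = contacts["rp_score"].median()

# Two-way split of respondents by response propensity.
respondents["rp_group"] = np.where(
    respondents["rp_score"] >= median_rp, "High-RP", "Low-RP")

# Compare mean ratings between RP groups for a question of interest
# (the column name here is hypothetical).
print(respondents.groupby("rp_group")["ease_of_doing_business"].mean())
```

If the group means differ, as they did in our data, that is a signal the unweighted overall score is being distorted by who chose to respond.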
What do you think? Do you use any other techniques for adjusting for self-selection and other biases?