Friday, September 9, 2011
A few rules for creating your customer satisfaction questionnaire

Ask questions in a structured way
When creating your customer satisfaction questionnaire, it's just as important to have a structure for yourself as it is for your respondents.
For yourself, because it ensures you don't miss a key question.
For your respondents, because there's nothing more frustrating than not understanding the logic of a customer satisfaction questionnaire.
For customer satisfaction questionnaires, the order of questions should generally follow the logic of the customer experience.
Example for an in-store purchase:
- Store location
- Shelf layout
- Salesperson helpfulness
- Checkout
- After-sales service
As in the example above, the questions must also be grouped by satisfaction attribute.
Keep the number of questions to a minimum
What questions will I actually use?
Many of our customers use their customer satisfaction questionnaires to ask all kinds of questions, out of curiosity (or sadism?), without really caring about the analyses they'll be making later on. We see cases where dozens of questions are asked, but in the end, only 2 or 3 indicators are tracked.
Are the questions really about customer satisfaction?
A customer satisfaction questionnaire has just one objective: to measure customer satisfaction. It sounds simple, but just because a question asks respondents to "rate the following from 1 to 10" doesn't mean it's a customer satisfaction question... that is, a question about the customer experience whose satisfaction you want to measure.
While this may seem hard to guess a priori (except in obvious cases), it's easy to measure a posteriori. To find out, it's usually sufficient to calculate the correlation coefficient between the question asked and overall satisfaction with the customer experience.
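As a rough sketch of that a-posteriori check, the Pearson correlation coefficient can be computed directly from the response data. The question names and scores below are purely hypothetical, illustrative data:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical 1-to-10 scores from 8 respondents (illustrative data only)
checkout_speed = [7, 8, 6, 9, 5, 8, 7, 9]
overall_satisfaction = [8, 8, 5, 9, 4, 7, 7, 9]

r = pearson(checkout_speed, overall_satisfaction)
print(round(r, 2))  # close to 1 → the question tracks overall satisfaction
```

A coefficient near 1 (or −1) suggests the question really does move with overall satisfaction; a value near 0 suggests it is measuring something else.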
Are the questions relevant?
It's very easy to multiply the number of questions asked ad infinitum (and beyond). But is this relevant? And at what point does the number of questions become too many?
Once again, it may seem difficult to guess at first sight... but when we analyze the results, we look at a fairly simple indicator: variance. If, on average, respondents to a group of questions (revolving around the same attribute) vary their scores very little, then it's highly likely that the number of questions is too high, or that the questions are not relevant enough.
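That variance check can be sketched in a few lines: for each respondent, compute the variance of their scores across the questions of one attribute group, then average across respondents. The three-question "checkout" group and all scores below are hypothetical, illustrative data:

```python
from statistics import mean, pvariance

# Each row: one respondent's 1-to-10 scores on three questions about the
# same attribute (e.g. checkout speed, queue length, checkout staff).
# Hypothetical data for illustration only.
responses = [
    [8, 8, 8],
    [6, 6, 7],
    [9, 9, 9],
    [5, 5, 5],
    [7, 7, 6],
]

# Average within-respondent variance across the question group
avg_variance = mean(pvariance(scores) for scores in responses)
print(round(avg_variance, 2))  # a value near 0 flags redundant questions
```

When this average sits close to zero, respondents answer the group's questions almost identically, which is exactly the signal that the group has too many questions or that they fail to discriminate between distinct aspects of the experience.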