Measuring Customer Satisfaction
Customers are the only reason businesses exist. We all know that.
To stay in business, though, we need to find out what our customers want, what they are receiving and the size of the "gap" between the two. It makes good sense to ask customers what they think.
A basic customer satisfaction survey can provide a lot of the information you need to run your in-plant. It can identify areas where you are doing well and areas that need improvement.
Planning a survey, though, can be intimidating. There are so many things you want to know. How can you ever hope to develop a concise, easy-to-complete document that people will actually use?
Another complication is that because surveys can be used for a variety of reasons (e.g. measuring customer satisfaction, awareness of services, defining service requirements), survey designers often start with one objective and end up with something completely different. Most successful survey designers live by the KISS Principle: Keep It Simple, Stupid!
A customer satisfaction survey is a good place to start. It can tell you where you stand and help identify areas needing improvement.
Many researchers argue that seven dimensions can define customer satisfaction: Timeliness, Responsiveness, Convenience, Location, Availability, Overall Quality and Overall Satisfaction. By crafting one or two questions around each of these dimensions, a researcher can get a pretty good idea of the level of satisfaction of the folks being queried. These are not the only relevant dimensions, but they are a good place to start.
Timeliness refers to whether the time required to complete the transaction was acceptable from the customer's perspective. We're not looking for shortest time; we're looking for on-time—an important distinction.
• Was your work delivered on time?
• Was your work ready when you needed it?
Responsiveness refers to your demonstrated follow-through. In a production environment it could refer to how well you responded to issues, concerns and problems encountered during production. Were you able to make last-minute changes? Did you complain about having to make the umpteenth proof? Did you follow through on questions about the status of the order? And so on.
Responsiveness for customer service, design, management and other "soft" services could refer to returning calls or answering e-mails, providing information on prices or schedules, or following up to see that issues raised earlier were addressed.
• Were your requests for information answered promptly?
• Were you able to reach the people you needed to reach to work on this order?
Convenience deals with the customers' ease of using your services. Location refers to the physical place of the point of contact. Convenience and location overlap in many ways, but they are also distinctly different. A shop with a well-developed electronic job submission system, for example, may be located at a considerable distance from the customer and still be very convenient to use. Similarly, a shop located very close to a customer could be very difficult to use and thus be inconvenient.
Sample convenience questions:
• Did you find our services easy to use?
• Did our electronic job submission process save you time?
Sample location questions:
• Did our location cause problems for you in placing this order?
• Is our location a factor in helping you fulfill your printing needs?
Availability deals with the extent to which the service is available when the customer needs it. A service might be convenient (easy to use) but not available, or it could enjoy a close location but not be open when needed. Testing for availability tells you whether or not your hours of operation and staffing schedules meet customers' needs.
Sample availability questions:
• Were our services available when you needed them?
• How long did you have to wait to see a customer service representative?
• How long did you have to wait to use the copier?
Overall quality is exactly that. This is the dimension where we ask customers to evaluate the actual product or service. Remember that quality is defined by customer expectations, and the best way to know whether or not your customers think you are doing high-quality work is to ask them.
Sample overall quality questions:
• Were you pleased with the overall quality of your order?
• Did the quality of our work meet your expectations?
Overall satisfaction is your "grade" from this customer. Here you are asking for an overall assessment of how well you met this customer's needs.
Sample overall satisfaction question:
• Overall, how satisfied were you with this order?
Questions That Work
How you ask for information is as important as what you ask about. Poorly worded questions can skew the results of your survey. Here are some guidelines for wording questions well:
1. Balance positive and negative questions.
The way a question is presented directly affects the answer. Consider the connotation from asking the same question in two different ways in the following statements:
- My job was delivered on time.
- I didn't get my work when I needed it.
Both statements address the same point, but the use of the negative term in the second one may cause respondents to think missing deadlines is a common practice. If your survey asks one question per dimension, use either all positive or all negative statements. If you ask two or more questions per dimension, you can use both positive and negative statements as long as they are applied equally, that is, one positive and one negative question per dimension.
2. Limit questions to only one dimension.
Avoid questions that address two dimensions. Consider the question "Was the customer service representative courteous and knowledgeable?" How would the respondent answer if the CSR were knowledgeable but rude?
3. Avoid ambiguity; be specific.
To be useful, questions should address specific areas of performance. What components of "responsiveness," for example, concern you? The question "Please rate the responsiveness of our staff" is vague and general and will yield little useful information. Try to develop more focused questions, like "Requests for information about my order were answered promptly."
4. Be sure the question is relevant to what you're trying to measure.
First-time surveyors sometimes develop lists of questions without considering what the question measures.
Structuring Your Questionnaire
The final step before assembling the actual survey is to decide how to present your questions and how to analyze the responses. Checklists and Likert-type formats are frequently used for customer satisfaction surveys.
Checklists offer the respondent a true/false choice for each question. Tabulation of the results is a straightforward process that yields a numerical score. The higher the number, the more favorable the results. While easy to administer and analyze, checklists have limited use because they do not measure levels of satisfaction.
Likert-type scoring provides insight into the strength of the level of satisfaction by allowing the respondent to select levels of satisfaction. Surveys using Likert-type measures typically present a number of choices ranging from very satisfied to very dissatisfied. Some researchers use as many as 10 levels of satisfaction, but for our purposes, five levels, where 1 is very dissatisfied and 5 is very satisfied, are usually sufficient.
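The tabulation behind a Likert-type survey is simple enough to sketch. The example below is a minimal illustration, not part of the original article; the dimension names and response values are hypothetical. It averages five-point scores (1 = very dissatisfied, 5 = very satisfied) by dimension:

```python
# Minimal sketch: tabulating five-point Likert-type survey responses.
# The dimensions and scores below are hypothetical examples.
from statistics import mean

# Each respondent's answers, keyed by survey dimension, scored 1-5.
responses = [
    {"timeliness": 4, "responsiveness": 5, "overall_satisfaction": 4},
    {"timeliness": 2, "responsiveness": 4, "overall_satisfaction": 3},
    {"timeliness": 5, "responsiveness": 3, "overall_satisfaction": 4},
]

def average_by_dimension(responses):
    """Return the mean score for each dimension across all respondents."""
    scores = {}
    for response in responses:
        for dimension, score in response.items():
            scores.setdefault(dimension, []).append(score)
    return {dim: round(mean(vals), 2) for dim, vals in scores.items()}

print(average_by_dimension(responses))
# A dimension averaging well below the others is a candidate for improvement.
```

Averaging works here because each level is a number; a true/false checklist, by contrast, can only tell you how many respondents answered favorably.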
Anecdotal And Demographic Data
Anecdotal information—that is, information without statistical significance—can be useful. Some companies view complaints as "market research we don't have to do." Anecdotal information can be viewed similarly.
Asking open-ended questions is a good way to begin to understand your customers' concerns and your successes. Consider adding one or two open-ended questions to allow customers to express perceptions that may have been missed by the survey.
• What one thing could we do to improve our services to you?
• What services would you like to see us add in the future?
Demographic data identifies useful characteristics of the respondent and allows closer examination of your survey results. Stationery customers, for example, may be less satisfied than customers who use process color printing. Or offset customers may be more satisfied than customers who typically use your digital services.
After you have designed your survey, you must use it in a way that removes as much bias as possible from the results. Two approaches frequently used to control bias are universal surveying and sampling.
In universal surveying you survey the entire population you're interested in studying. Sampling refers to selecting a smaller, representative group. If the samples are selected randomly, then measures of the sample are likely to accurately reflect the perceptions of the population.
A practical approach would be to limit your sample size to 300 or 400 people in large organizations and as few as 100 in organizations of a couple of thousand employees. Random selection can be a straightforward process ranging from drawing names from a hat to generating a random-name list with statistical software.
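Generating a random-name list with software is straightforward. The sketch below is a hypothetical illustration (the customer list and sample size are invented for the example); it uses Python's standard library to select survey recipients without replacement:

```python
# Minimal sketch: drawing a simple random sample of survey recipients.
# The customer list and sample size are hypothetical examples.
import random

# Stand-in for a real customer roster, e.g. a 2,000-person organization.
customers = [f"customer_{i}" for i in range(2000)]

SAMPLE_SIZE = 100
random.seed(42)  # fixed seed only so this demonstration is reproducible

# random.sample selects without replacement, so no one is surveyed twice.
sample = random.sample(customers, SAMPLE_SIZE)
print(len(sample))
```

Because every customer has an equal chance of selection, measures of the sample are likely to reflect the whole population, which is the point of sampling in the first place.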
Use Your Results
Once you've completed your survey and tabulated the results, what do you do with them? Surveys that are not used waste everyone's time.
One of the objectives of a customer satisfaction survey is to identify opportunities for improvement. After the surveys are collected and the data is recorded, analyze the results. Discuss the findings with team leaders, customer service reps, and other key stakeholders and look for meaning. Did customers seem to be less satisfied with one dimension? If so, why? Call several customers and ask follow-up questions.
Finally—and perhaps most importantly—share the results of the survey and the improvements you develop as a result. Write and distribute a follow-up letter, publish the results on your Web site and include them in your planning and reporting documents.
Let your customers know that you listened, that you value their feedback and that you've used that feedback to improve your business.
Ray Chambers, CGCM, MBA, has invested over 30 years managing and directing printing plants, copy centers, mail centers and award-winning document management facilities in higher education and government.
Most recently, Chambers served as vice president and chief information officer at Juniata College. Chambers is currently a doctoral candidate studying Higher Education Administration at the Pennsylvania State University (PSU). His research interests include outsourcing in higher education and its impact on support services, as well as the management of those services. He also consults (Chambers Management Group) with leaders in both the public and private sectors to help them understand and improve in-plant printing and document services operations.