Techniques

Think like a Respondent to Improve Survey Data Quality

David G. Bakken

The structured survey questionnaire has been a staple of social science and market research since at least the early part of the 20th century. Market researchers rely on survey data to divine the insights that lead to better business decisions. We need to be confident that the survey data on which we base those decisions are both reliable and valid.

We’ve come a long way in the practice of survey research in terms of understanding and managing sources of error such as scale usage bias. But an accumulation of research into the unobservable cognitive processes that come into play when respondents answer survey questions shows that crafting questions that reliably elicit the information we think we are asking for is no easy matter. In fact, the survey question may be the weakest link in the chain of components that make up the typical quantitative market research study.

As a general rule, market researchers who have not completed post-graduate study specifically in survey design are unfamiliar with the research on the cognitive aspects of survey taking. Moreover, I think we are largely unaware of the fundamental asymmetry in mindsets between the question writer and the person who will be answering the question. When the survey writer uses a phrase like “your household,” she knows exactly what she has in mind. The same phrase, however, may trigger different specific or general mental associations in someone else, and that can create a problem for the market researcher.

Since it is unlikely that the average respondent will take the time to learn to think like a survey writer, a key to improved survey data quality is for question writers to learn to “think like a respondent.” This starts with understanding some fundamental principles of communication. British philosopher of language Paul Grice proposed a principle of cooperation with four maxims to explain how we arrive at shared understanding in conversation. The principle and its maxims are implicit—we do not think about them as we converse, but they constitute unwritten rules and expectations.

For the most part, we expect others to cooperate with us in arriving at a mutual understanding of what we are saying to each other.  In getting there, we rely on maxims regarding quantity (be as informative as needed, but not more so), quality (do not say what you know is false or you are unable to prove), and relation and manner (say what is relevant, and be clear).  Applying these maxims helps us resolve ambiguity and confusion. For example, if a survey asks how happy you are in your marriage or other significant relationship followed by a question asking how happy you are in general, you are more likely to give different ratings than if the order is reversed or the questions were asked at different times in different surveys. The reason is that you assume, in the first case, that the “overall” happiness question does not include your marriage because there was a separate question about that.
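To make the order effect concrete, here is a minimal sketch of a split-ballot pre-test. Everything in it is my own illustration rather than anything from the studies behind this example: the question IDs, the response format, and the analysis are all hypothetical. Each respondent is randomly assigned one of the two question orders, and comparing the mean “general happiness” ratings between the two ballots shows whether the order shifts the answers.

```python
import random
from statistics import mean

# Hypothetical question IDs for the two happiness items discussed above.
QUESTIONS = ["Q_MARITAL_HAPPINESS", "Q_GENERAL_HAPPINESS"]

def assign_ballot() -> list[str]:
    """Randomly assign a respondent to one of the two question orders."""
    order = QUESTIONS[:]
    if random.random() < 0.5:
        order.reverse()
    return order

def order_effect(responses: list[dict]) -> float:
    """Difference in mean general-happiness ratings between the ballots.

    Each response dict records the order shown plus the two ratings, e.g.
    {"order": [...], "Q_GENERAL_HAPPINESS": 7, "Q_MARITAL_HAPPINESS": 8}.
    """
    specific_first = [r["Q_GENERAL_HAPPINESS"] for r in responses
                      if r["order"][0] == "Q_MARITAL_HAPPINESS"]
    general_first = [r["Q_GENERAL_HAPPINESS"] for r in responses
                     if r["order"][0] == "Q_GENERAL_HAPPINESS"]
    return mean(specific_first) - mean(general_first)
```

A non-zero difference between the two ballots, on the same question asked of equivalent random groups, is exactly the kind of context effect the cooperation maxims predict.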

What does this mean for survey question writers? For one thing, respondents assume that if you put something in a question it must be important to their interpretation of the question (conversely, if something is “missing,” it must not be relevant).  For another, respondents expect that the question contains all the information they need to come up with an answer.

Across numerous studies conducted by Norbert Schwarz, Seymour Sudman, Roger Tourangeau, and many others, a framework has emerged for understanding the cognitive origins of survey measurement error. In addition to violations of the principle of cooperation, such errors arise from the interaction of question characteristics with specific cognitive processes.

Take the case of retrieving memories—definitely a key cognitive process in survey taking. Survey questions often elicit autobiographical details and life history information (“How many times in the past year have you….?”) and other factual information. We now know that, except for a few “chronically accessible” memory objects such as your date of birth or telephone number, memories are by and large reconstructed rather than recalled. Because this reconstruction usually occurs outside of conscious awareness, however, we feel as though we’ve recalled the memory intact. And so we ask survey respondents to provide factual accounts as if our memories operated like a jukebox or a digital computer—a “request” for information leads directly to a location in memory where that information is stored.

Cognitive psychologists have demonstrated that the memory reconstruction process is far from perfect. Harvard University psychologist Daniel Schacter, for example, has identified what he calls the “seven sins of memory”—transience, absent-mindedness, blocking, misattribution, suggestibility, bias, and persistence. Let’s take a closer look at transience (another name for “forgetting”). Numerous memory experiments have shown that, at least for verbal learning tasks, we may forget as much as 50% of what we’ve learned within a few hours of completing the task (although the rate of forgetting slows considerably after that). Fortunately, we also know something about the factors that affect transience, and that knowledge can help us write better survey questions. For the most part, including appropriate retrieval cues in our survey questions will help respondents retrieve the information we want. For example, multi-modal cues (such as visual images in addition to words) may enhance retrieval. And we can avoid asking questions about events that are likely to be unmemorable, such as specific instances of frequently occurring behaviours or the details of casual conversations.
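To get a feel for the shape of transience, here is a back-of-the-envelope sketch. The article cites no formula, so the power-law form and the decay exponent below are assumptions on my part, chosen only to reproduce the pattern just described: roughly half of new verbal material lost within a few hours, with the rate of loss slowing considerably afterwards.

```python
# Assumed power-law retention curve R(t) = (1 + t)^(-d); the functional
# form and the exponent d are illustrative choices, not measured values.
def retention(hours: float, d: float = 1 / 3) -> float:
    """Approximate fraction of newly learned material still retrievable."""
    return (1 + hours) ** -d

for label, t in [("1 hour", 1), ("7 hours", 7), ("1 day", 24), ("1 week", 168)]:
    print(f"{label:>8}: {retention(t):.0%} retained")
# Prints roughly 79%, 50%, 34%, and 18% -- steep early loss, then a long tail.
```

The practical lesson for question writers is in the tail: asking about a specific occurrence from last month is asking the respondent to reconstruct from very little, so the answer will owe more to inference than to memory.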

Learning to think like a respondent
The average busy market researcher has little time to devote to becoming an expert in the cognitive psychology of survey research.  Even so, a few simple steps may help us improve the effectiveness of our survey questionnaires (and, by extension, the ROI on MR).

First, we can learn to look at each question we write with a four-step model of survey response in mind. This model was proposed by Roger Tourangeau and his colleagues (Tourangeau, Rips & Rasinski, 2000). The four steps are comprehension, retrieval, judgment, and response matching.

Comprehension encompasses all the mental work of understanding the meaning of the question. Comprehension problems are most likely to arise when the survey writer mistakenly assumes that respondents share his understanding of the terms used in the questions.  Other problems stem from the use of vague concepts (“big government”) and vague quantifiers (“a few” and “often”).  Respondents always try to resolve ambiguity so that they can answer the question, but they may not resolve it in the way the survey writer intended.

Retrieval represents the process of searching our memories for mental representations that are relevant to the question. Once we have retrieved a set of representations, we need some way to evaluate and integrate them into our “answer” to the survey question—the judgment step. Finally, we must match our internally generated answer to the responses available in the survey instrument.

To see how this works, imagine that you have just been asked to rate your overall satisfaction with a recent purchase from Amazon. Comprehension is unlikely to be an issue, unless Amazon is imprecise about what it means by “recent.” Because the invitation names the product you purchased, you naturally bring to mind thoughts about the product. Whether you first remember something positive or something negative is likely to influence subsequent thoughts. After a few seconds you have a handful of memories, and you mentally weigh them to decide whether you are, in general, satisfied or not. You give more weight to those memories that are emotional (how you felt) and less to those that are simply factual. In the end you conclude that you are “more satisfied than not,” and you go back to the survey question. Unfortunately, there is no option to say that you are “more satisfied than not.” Instead you have to choose a number between 1 and 10, where 1 means “not at all satisfied” and 10 means “completely satisfied.”
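That last step, squeezing a verbal judgment into a numeric scale, can be made concrete with a toy sketch. The mapping below is entirely hypothetical (nothing here comes from Amazon’s actual instrument); it simply shows that the respondent’s internal answer and the recorded answer are different objects, and that the translation between them is lossy.

```python
# Hypothetical translation from internal verbal judgments to a 1-10 scale;
# the respondent must pick the nearest available point.
VERBAL_TO_SCALE = {
    "not at all satisfied": 1,
    "more dissatisfied than not": 4,
    "more satisfied than not": 7,   # best available fit, not an exact match
    "completely satisfied": 10,
}

internal_answer = "more satisfied than not"
recorded_answer = VERBAL_TO_SCALE[internal_answer]  # the survey stores 7
```

A different respondent holding the same internal judgment might just as plausibly record a 6 or an 8, which, from the researcher’s point of view, is pure measurement noise introduced at the response-matching step.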

You might not be able to think this way as you write a question, but you can train yourself to think in these terms as you review the questions that you or other market researchers have written.
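One practical way to build this habit is to keep a standing checklist keyed to the four steps and run every draft question through it. The sketch below is my own illustration of such a checklist, not a published diagnostic; the prompts simply paraphrase the issues discussed in this article.

```python
# Illustrative review prompts keyed to the four-step response model.
REVIEW_PROMPTS = {
    "comprehension": [
        "Will every respondent read the key terms the way I intend?",
        "Are there vague concepts or vague quantifiers to replace?",
    ],
    "retrieval": [
        "Is the requested information likely to be memorable at all?",
        "Does the wording supply retrieval cues (time frame, context)?",
    ],
    "judgment": [
        "How will respondents combine what they retrieve into one answer?",
        "Could earlier questions change what gets weighed here?",
    ],
    "response matching": [
        "Do the answer options cover every plausible internal answer?",
        "Are the scale points and endpoints clearly anchored?",
    ],
}

def review(question_text: str) -> None:
    """Print the four-step checklist for one draft survey question."""
    print(f"Reviewing: {question_text}")
    for step, prompts in REVIEW_PROMPTS.items():
        print(f"[{step}]")
        for prompt in prompts:
            print(f"  - {prompt}")
```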

You can begin to “observe” respondents’ thinking processes first-hand by conducting think-aloud pre-tests (also known as “cognitive interviews”). In a think-aloud pre-test the respondent does exactly that—verbalising whatever he or she is thinking on the way to answering each survey question. Conduct several think-aloud pre-tests and you definitely will start thinking more like a respondent as you craft survey questionnaires.

David G. Bakken, Ph.D., is Chief Insight Officer at KJT Group.

