Techniques

We’re Only Human

Abi Hill and Stephen Phillips

How can behavioural economics change the way we conduct research?

Behavioural economics argues that traditional economics often misrepresents actual behaviour because it is built around a ‘rational’ actor who doesn’t really exist. Traditional economics works by creating and analysing models or theories to build a macro view of the world. Then someone discovers the model is flawed, so another model is created which, inconveniently, is eventually disproved through experiments or data.

Behavioural economics doesn’t assume that humans behave in rational ways. It looks at actual human behaviour and accepts our irrational choices. It understands that making mistakes is part of human nature, and that accepting this allows us to recognise where we fall short and, therefore, where we can improve.

Researchers are often guilty of using unrepresentative models. Unlike economists, however, we don’t test our models. We take them as given, modifying or reconstructing them on the same presuppositions, and then populate them via interviews, surveys and statistics – without recognising that fixed models, based on the behaviour of ‘rational man’, distort our results before we begin.

How to improve market research
Here are two intertwined elements of behavioural economics and some of their implications for research:

Investigating choice: the paradox of choice shows that, contrary to the rational view that more choice leads to greater satisfaction, too much choice actually leads to paralysis or buyer’s remorse. Choice architecture matters too: the way in which options are presented has an impact on decision-making.

Anchoring: Reference points (anchors) impact people’s subsequent decisions and opinions.

The classical notion of homo economicus feeds the expectation that, as choice increases, people respond by evaluating each new option and making selections according to price, features and fit. As our choices increase, so should our satisfaction.

Behavioural economics, however, highlights the fact that, with more choice, humans tend to default to simple heuristics to reduce the complexity of decision-making. We ignore numerous options by focusing on a recommendation or choosing a brand leader because we are “cognitive misers.” We outsource decision-making using cues such as ‘what others do’ to make our choice easier.

We often see this behaviour in purchase journeys, especially for high-investment products such as TVs and cars. The purchase process can drag on for months without a decision being made. Consumers spend time researching features and functions they never previously considered important, before defaulting to what’s on sale, or even deciding not to buy the selected car because a member of staff ignored them. The same happens with smaller products, such as buying what we have always bought in the cereal aisle.

It is also beneficial to look at choice architecture. That is to say, there is no neutral and unbiased way to present a choice, as people are influenced by the way options are presented to them. When the context changes – for instance the way a canteen is laid out, or whether the salads or the sweets are within easiest reach – so do the decisions we make.

Many of these decisions happen on an unconscious level. Some are shortcuts we don’t even realise we’re making, so simply asking people why they did something won’t give us the real answer. Very often they don’t have this level of introspection.

Asking questions instead of exploring the consumer’s motivations often encourages respondents to default to easy, apparently rational answers such as ‘it was the cheapest.’ In fact, the decision-making process is more complex and opaque.

We can explore these unconscious decisions by using technologies such as eye-tracking, which allow us to see what stimuli consumers absorb and to note how this affects their final choice. There are other, often forgotten, techniques that researchers should use in addition to questioning, including language analysis, facial micro-expression analysis and body-language cues.

Multiple questions and lengthy questionnaires don’t generate better answers, as responses will be framed by the way the question is asked, and respondents tend to switch off as their attention wanes. Rotating questions and ensuring they are as neutral and non-leading as possible will reduce this bias.
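As a rough illustration of what rotation can look like in practice, here is a minimal Python sketch. The question bank and the idea of seeding the shuffle with a respondent ID are our own illustrative assumptions, not part of any study described here.

```python
import random

# Hypothetical question bank -- invented for illustration only.
QUESTIONS = [
    "How satisfied are you with your current brand?",
    "How likely are you to recommend it to a friend?",
    "How important is price in your decision?",
    "How important is customer service in your decision?",
]

def rotated_questionnaire(respondent_id: int, questions=QUESTIONS):
    """Return the question list in a per-respondent order."""
    rng = random.Random(respondent_id)  # seed with the respondent ID
    order = list(questions)
    rng.shuffle(order)
    return order

if __name__ == "__main__":
    for rid in (1, 2, 3):
        print(rid, rotated_questionnaire(rid))
```

Seeding the shuffle with the respondent ID keeps each person’s order reproducible while spreading any order effects evenly across the sample.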

Anchoring
Anchoring is a pervasive and easily replicated element of human nature. The term refers to the effect of reference points that people use as anchors for subsequent decisions and opinions. It is most commonly seen when talking about numbers, especially pricing: people start from a relevant number and then adjust away from it towards what they believe is the correct answer, typically without adjusting far enough.

For example, we wanted to find out how much people would pay for concert tickets to a charity event. Aware of the effect of anchoring in groups, we asked respondents to write down – rather than say – how much they’d pay. This produced a wide range of responses, from £15 to £150. However, despite our instructions, a respondent in one group said aloud, “I’d say about £50.” The other members of the group then put their price within £10 of that amount.
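To make the effect concrete, here is a toy calculation with invented figures that merely mimic the ranges described above (they are not the actual responses from our groups): once an anchor is spoken aloud, the spread of stated prices collapses around it.

```python
import statistics

# Hypothetical willingness-to-pay figures (GBP), made up for illustration;
# they only mimic the ranges described in the text, not the real data.
written_before_anchor = [15, 25, 40, 60, 75, 90, 120, 150]
spoken_anchor = 50
said_after_anchor = [42, 45, 48, 50, 52, 55, 58, 60]  # all within ~£10 of the anchor

for label, prices in (("unanchored", written_before_anchor),
                      ("anchored", said_after_anchor)):
    spread = max(prices) - min(prices)
    print(f"{label:>10}: range £{spread}, "
          f"std dev £{statistics.stdev(prices):.1f}, "
          f"mean £{statistics.mean(prices):.1f}")
```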

The potential impact of anchoring on a pricing study is clear: no matter the order of the questions, responses to what is too little, too much and just right will always be anchored to a previous answer. The phenomenon can also affect surveys that aren’t specifically about pricing – the numbers we include, or ask respondents to think about, may influence their responses. How is the questionnaire ordered? Will the preceding question impact the next?
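One practical way to probe this is to rotate which question opens the pricing block and then compare answers by what respondents saw first. The sketch below is purely illustrative – the records and field names (first_question, just_right_price) are our own assumptions, not a description of any real dataset.

```python
from statistics import mean

# Hypothetical survey records -- invented for illustration. Each record notes
# which price question the respondent saw first and their "just right" price.
responses = [
    {"first_question": "too_much", "just_right_price": 62},
    {"first_question": "too_much", "just_right_price": 58},
    {"first_question": "too_little", "just_right_price": 41},
    {"first_question": "too_little", "just_right_price": 45},
]

def mean_by_first_question(records):
    """Group 'just right' answers by which question respondents saw first."""
    groups = {}
    for r in records:
        groups.setdefault(r["first_question"], []).append(r["just_right_price"])
    return {key: mean(vals) for key, vals in groups.items()}

print(mean_by_first_question(responses))
# A large gap between the group means suggests the opening question
# is anchoring the answers that follow.
```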

These effects also apply to discussion guides, which is why it is important to keep them as open as possible. Free-flowing conversations allow respondents to use natural anchors, rather than hooking on to our imposed ones. We use a non-directed interviewing technique in which we ask respondents to tell their ‘stories’ of a particular event in their own words. This allows them to create the order of the discussion, and their narratives are anchored to natural points of reference that occur in reality, rather than in our presuppositions.

This technique also aids our understanding of what impacts consumers in terms of price points and brand perceptions. We can learn what the anchors are in the actual marketplace, and can gauge which criteria – price, service features, trust, etc. – consumers use as base points for comparisons.

The context in which we conduct research might impact our answers. A focus group conducted in a Hilton will generate different responses than one conducted in a community centre. Often hall tests come back with more positive results than online questionnaires, due to the different study environment. It’s also worth thinking about how the warm-up conversation (or lack thereof) sets a tone. Helping someone feel at ease can make him more positive; telling someone she is an expert can make her more critical.

We can’t ignore anchoring. Trying to exclude it from the research process isn’t possible or desirable, as we would then be measuring something artificial. We must both consider existing anchors, and be aware of the anchors – numerical, verbal or situational – we provide.

So what does this tell us?
After 60 years of marketing and research, we still have only puddles of wisdom instead of the deep pools that come from genuinely understanding the customer point of view. Research has to change. Researchers who are not afraid to put the consumer at the centre of the process can create fantastic research. Those who don’t change will continue the endless cycle of rediscovering the thoughts they themselves created.

So review your research methods, and remove your anchors and predefined constructs – anything that imposes your perspective and marginalises that of your respondents. Instead, find consumers’ anchors. Extend the range of customers you speak to, and get them researching their own understanding of the brand. It’s the only way you’ll really get to the heart of the matter.

Abi Hill is senior researcher and Stephen Phillips is chief happiness officer at Spring Research.

