Techniques

Context Hacks and Emotional Packs – Part 1

Tom Ewing

Research In The Behavioural Age
In the call for papers at this year’s ESOMAR Digital Dimensions, one particular topic really excited me: “survey research in a post-Kahneman world”. What’s encouraging is how the topic accepts that we do live in a post-Kahneman world. In other words, Daniel Kahneman’s ideas – that people take most of their decisions using their fast, intuitive “System 1” not their slow, considered “System 2” – are now understood and acknowledged by most market researchers. They have become mainstream.

That’s great news, if so. It means we can finally stop telling people what Kahneman and his fellow behavioural economists say, and start showing what we actually do about it all. So in this piece I’m going to mostly skip the explanations, and get straight to the implications and solutions.

Behavioural economics asks two huge questions of researchers. First, how do people make “System 1” decisions? And second, how do we adjust our methods to take that into account?

Is Neuroscience The Answer?
Some people think there’s an easy answer to both of these: neuroscience. If you directly measure brain activity you’ll be able to pick up subconscious decisions, and can stop asking explicit questions. Roughly the same thinking lies behind the current interest in bio-research – facial recognition and coding, galvanic skin response, and so on.

The problem with neuroscience as a research method lies in what you might call “science envy” – an implicit belief that the job of market research is to be scientifically accurate. It isn’t. The job of market research is to help businesses change behaviour. To change behaviour you need to understand it – something neuroscience can really help with – but you also need to measure it, so you know if the change is working. You can either measure behaviour directly or you can find the most predictive, practical and cost-effective proxy for it. That might be measuring brain activity… but usually it isn’t.

In other words, academic neuroscience is vital for helping us understand what happens in the brain when a decision is made, or an emotion is felt. But that doesn’t mean commercial neuroscience is our best bet for measuring the behaviour those decisions and emotions lead to.

Where does it fall down? Aside from the cost and practicality, it’s a question of context. Behavioural economics has demonstrated the huge range of influences on everyday decision making – including the powerful pull of the environmental and social context on people’s decisions.

For instance, at BrainJuicer we recently did some work for an alcohol awareness charity, where our job was to help combat binge drinking in bars by encouraging water consumption. Water was freely available, but not many people were drinking it. We changed the context in two ways. We trialled putting free bottles of drinking water in prominent places in real bars, and we also put up posters showing someone raising a pint of water to their lips (see figure 1). Together, these low-cost interventions raised water consumption by over 300%.
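The logic of that before/after measurement is simple enough to sketch in code. The numbers below are made up for illustration (the article only reports the "over 300%" uplift, not the underlying counts), but they show how an uplift like this would be computed and checked against sampling noise with a two-proportion z-test:

```python
from math import sqrt, erf

# Hypothetical counts, NOT the actual study data: patrons observed drinking
# water, out of patrons counted, before and after the context change.
before_water, before_total = 40, 1000   # 4% baseline
after_water, after_total = 170, 1000    # 17% after the intervention

# Relative uplift in water consumption
uplift = (after_water / after_total) / (before_water / before_total) - 1
print(f"uplift: {uplift:.0%}")

# Two-proportion z-test: is the rise bigger than chance variation?
p_pool = (before_water + after_water) / (before_total + after_total)
se = sqrt(p_pool * (1 - p_pool) * (1 / before_total + 1 / after_total))
z = (after_water / after_total - before_water / before_total) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
print(f"z = {z:.2f}, p = {p_value:.4f}")
```

With these illustrative figures the uplift comes out at 325% and the z-statistic is far beyond conventional significance thresholds, which is the kind of direct behavioural evidence no amount of stated intention could supply.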

Figure 1: The poster we displayed in bars – showing behaviour “in action” increases the likelihood of change.

Neuroscience definitely inspired those experiments – the poster was based on the theory of mirror neurons, which says that when people see others performing a familiar action, their brain behaves as if they themselves are doing it. But neuromarketing techniques couldn’t have predicted or measured the actual behaviour change created when we altered the context in the bars.

Context Hacks
Context is vital for behaviour change. And what we were doing with context in our drinking water experiments was a kind of ‘context hacking’. The idea of hacking originally comes from the tech world, but as technology and society become more and more entwined, the metaphors of technologists take on a new and wider meaning. So now it’s common to talk about life hacks, productivity hacks, diet hacks and – in our case – context hacks.

So what is a hack? As Steven Levy put it in his 1984 book Hackers, hackers believe “that essential lessons can be learned about the systems—about the world—from taking things apart, seeing how they work, and using this knowledge to create new and more interesting things.” In other words, hacking is an experimental ethos based on understanding and discovery of how systems work.

This is also a very good description of how researchers apply behavioural economics lessons practically. We identify the desired behaviour. We understand what’s promoting or holding back that behaviour. We work out practical ways to remove those barriers and promote the behaviour we want. And we test them.

This is why actual behaviour is always the gold standard variable when you’re creating context hacks. The preferred method for researchers looking to change behaviour should be to measure it directly before and after you change the context. Without direct measurement, it’s much harder to know whether or not your ideas will work.

Except sometimes you simply can’t measure behaviour directly. A lot of research is done on things that don’t yet exist in the world – new concepts, pack designs, adverts or promotional strategies, for instance. You can’t measure behaviour around those directly. So what do you do instead?

Game On
At ESOMAR’s 3D Digital Dimensions conference in Miami in 2011, BrainJuicer’s Peter Harrison put forward a solution to this. He had been looking at the then-hot topic of “gamification” – inspiring research respondents by turning surveys into games.

What interested Peter wasn’t so much games themselves as the psychological “hot states” into which they put their players. People playing some games were highly emotional – their decisions were often impulsive, fast, and highly dependent on other people. In other games, people might become cold and calculating, concentrating hard and making slow decisions.

In other words, games were superb vehicles for inducing particular modes of decision making – Kahneman’s System 1 and System 2. Games were psychological context hacks – changing the frame of a decision and pushing people towards faster or slower thinking.

What this insight offered was a solution to one of the huge problems of applying behavioural science to market research. After all, typical surveys were also excellent vehicles for inducing slow, considered, System 2 decision-making. Every survey was quite explicit about it! "Now we'd like you to think about…" they would say, or "Now we'd like you to consider…".

But if most decisions are made using System 1, this was exactly the wrong approach for surveys to be taking. If they weren’t getting at the real drivers of behaviour, they wouldn’t be getting at real behaviour.

Games and game mechanics, though, offered a way of hacking the survey context and pushing participants towards System 1 thinking. Using game mechanics, we could simulate decisions better in situations where real behaviour wasn’t available.
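One common game mechanic for nudging respondents toward fast, intuitive answers is a response-time limit. The sketch below is my own illustration of the general idea, not BrainJuicer's actual method: choices made quickly are treated as System 1-like, while slow choices are flagged as likely System 2 deliberation.

```python
# Hypothetical speeded pack-choice data: (chosen_pack, reaction_time_seconds).
# A tight time limit is the "context hack": fast answers are more likely to
# reflect intuitive System 1 preference, slow answers suggest the respondent
# dropped into deliberate System 2 weighing-up.
responses = [
    ("A", 0.9), ("B", 1.4), ("B", 0.7), ("A", 3.8),
    ("B", 1.1), ("B", 0.8), ("A", 4.2), ("B", 1.3),
]

TIME_LIMIT = 2.0  # illustrative cut-off in seconds

fast = [pack for pack, rt in responses if rt <= TIME_LIMIT]
share_b_fast = fast.count("B") / len(fast)
share_b_all = [pack for pack, _ in responses].count("B") / len(responses)

print(f"B's share among fast (System 1-like) choices: {share_b_fast:.0%}")
print(f"B's share among all choices: {share_b_all:.0%}")
```

In this toy data, pack B dominates the fast choices more than the overall choices, which is exactly the kind of gap between intuitive and considered preference that a speeded task is designed to reveal.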

The Power Of Packaging
The area we decided to focus on to try this out was pack testing. It’s almost impossible to overstate how important packaging is in consumer decisions. It’s the final touchpoint for a brand – the version of the brand people actually see, touch and feel at the very moment they’re making a decision. It’s the last chance to influence the consumer.

Given this last chance, some brands take the opportunity to bombard their potential customer with information. Specifications, benefits, nutritional information, new flavours and formulas… the informational clutter on consumer goods packaging can be enormous. And away from the FMCG world, the problem is even worse.



Packaging like this is – consciously or not – making an assumption about how people decide. Info-heavy packs assume that people use the highly considered System 2 to make decisions – that they can be swayed by offering new information for them to weigh up. But marketers and behavioural economists alike understand that this isn’t the case. For most categories, the decision time in a store is a few seconds at most.

Other brands go in a different direction, using clear and recognisable pack designs, with attractive pictures to trigger an emotional reaction in customers. This kind of packaging is making its appeal to System 1 – our quick decision-making brain, guided by emotions and rules of thumb. Since System 1 is the dominant force in most of our decisions, we'd expect that kind of packaging to perform well.

But research doesn’t always see it that way. The problem, as mentioned earlier, is that surveys themselves push people into highly considered System 2 responses. Standard pack testing methods are no exception. By giving people time to examine packs before asking them for their reactions and choices, surveys put too much emphasis on detailed consideration and overweight the importance of on-pack information.

How do we know this? Because it comes out in the results – and makes them go wrong. In 2011, an FMCG client approached us with a thorny problem. Their brand was being beaten in stores by a competitor, and they strongly felt packaging was part of the problem. But every time they ran a standard pack test, they beat their competitor.

They wanted a more trustworthy pack testing solution, one that better reflected the existing market conditions, where their competitors were winning.

We looked at their packs and their competitors' packs, and worked out where the problem might be. The client's packs were high on information, but their competitors used bigger, bolder images and were more visually appealing. If real customers were using System 1 to make decisions, and survey takers were being nudged to System 2, maybe that explained why respondents preferred the info-heavy pack while real people picked the more visual, emotional one.

How to test that out? By hacking the survey’s context, and pushing it more towards System 1.

TO BE CONTINUED…

In part 2 of this article, you’ll find out how we hacked the context of pack testing to get more accurate results – and how the same thinking can be applied to other parts of shopper marketing…

Tom Ewing is Digital Culture Officer at BrainJuicer
