After a busy morning, you deserve a break. You log into the Guardian Soulmates dating site to check your messages. Just the two. Neither seems eye-catching so you don’t reply.
You’re not getting as many enquiries as you expected. Perhaps you should update your profile? The first thing you change is your photo to a more flattering one, although it is a few years old.
What next? The problem is that everyone else exaggerates, so anyone looking at your profile will assume you do too. Perhaps, by being scrupulously honest, you’re not conveying the truth.
You tweak your personal information by adding a couple of extra inches onto your height. Maybe that’s overkill – you readjust your height to add just one inch. Then you change your job title. You’re already doing the work of a director, probably best to call yourself one.
You’re not alone, by any means, in stretching the truth to breaking point on your dating profile. Christian Rudder, founder of dating site OkCupid, analysed the profiles of 1.51 million active users and uncovered evidence of systematic lying.
Men on the site claimed to earn more than $100,000 at four times the rate of men of the same age and postcode. Suspicious, but not as fishy as users’ average claimed height: supposedly two inches taller than the population average.
Most innovatively, Rudder examined the age of uploaded profile photos. When digital cameras take photos they attach text tags, called EXIF metadata, to the JPEG file. These tags capture the date and time the picture was taken. Rudder found that while the average photo was 92 days old, the photos rated as “hottest” were much older.
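The mechanics are simple. An EXIF capture timestamp is stored as a plain string such as “2014:03:01 14:03:22”, and a photo’s age is then just date arithmetic. A minimal sketch (the timestamp and dates below are invented, not Rudder’s data):

```python
from datetime import datetime

# EXIF stores capture time as "YYYY:MM:DD HH:MM:SS" (e.g. in the
# DateTimeOriginal tag). Given that string, a photo's age in days --
# the quantity Rudder analysed -- is straightforward date arithmetic.
def photo_age_days(exif_datetime: str, today: datetime) -> int:
    taken = datetime.strptime(exif_datetime, "%Y:%m:%d %H:%M:%S")
    return (today - taken).days

# Hypothetical example: a photo taken on 1 March 2014, checked on 1 June 2014.
print(photo_age_days("2014:03:01 14:03:22", datetime(2014, 6, 1)))  # 91
```

Run across every profile photo on a site, this yields the age distribution Rudder compared against attractiveness ratings.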
Sex, lies and survey data
If Rudder’s study hinted at lying, the National Survey of Sexual Attitudes and Lifestyles (NATSAL) categorically confirms it. The survey, conducted among 15,000 respondents by UCL and the London School of Hygiene and Tropical Medicine, is the gold standard of research. In 2010 it found that British heterosexual women admit to a mean of eight sexual partners, compared to twelve for men. The difference is logically impossible: every heterosexual partnership adds one partner to one man’s tally and one to one woman’s, so if everyone is telling the truth the totals, and hence the means, for each gender must be the same.
All of this goes to show that advertisers trying to understand their customers have a problem: if they listen uncritically to consumers, they’ll be misled.
Even honest answers can be misleading
To complicate matters, people often don’t know their genuine motivations. This is demonstrated by an experiment led by Adrian North, then a psychologist at Leicester University.
Over a fortnight he alternated the background music played in a supermarket wine aisle between traditional German oompah music and French accordion music. He surveyed customers who had bought either French or German wine. When accordion music was played, French wine accounted for 77% of wine sales; when the soundtrack was oompah music, German wine represented 73% of sales.
The scale of the variation shows that music was the prime determinant of the type of wine bought. However, only 2% of buyers spontaneously attributed their choice to the music. Even when prompted, 86% of people stated that it had no impact at all.
It’s not that they were lying; more that they were unaware of their motivations. The reasons proffered were mere post-rationalisations or, in the psychological term, confabulations. In the memorable words of Jonathan Haidt, author of The Righteous Mind, the rational mind “thinks of itself as the Oval Office when actually it’s the press office”.
How to apply this effect
1. From lies, learnings
Lies can be illuminating.
Take the NATSAL study that investigated how many sexual partners men and women admitted to. The claims are untrue but they reveal gender expectations about promiscuity. Men exaggerate their promiscuity, while women downplay it. The changing ratio between men and women tells a tale too. In 1990 men claimed to have two and a half times more sexual partners than women; by 2010 the gap had dropped to 50%. Gender expectations are levelling out.
Data are not transparent. They need to be teased, analysed and probed. Taking them at face value will mislead you. But if you dig further, then insights can be uncovered.
2. Adapt your surveys
Louis Heren, foreign correspondent at The Times, famously advised: “When a politician tells you something in confidence, always ask yourself, ‘Why is this lying bastard lying to me?’” His scepticism ensured he probed and pushed politicians until he uncovered the truth.
Similarly, you need to be prepared for deceit, and design surveys accordingly.
There are a couple of useful techniques. First, consider asking respondents how they think others might behave.
I used this approach when investigating whether people feel dissatisfied by the idealised images projected on social media. Of the 300 consumers I surveyed, 26% said they had pretended to be happier, or more successful, than they actually were. Furthermore, just over a third claimed to have felt unhappy when they saw others’ success on social media.
While this was a sizeable proportion of the sample, I had a hunch that it was an underestimate. After all, there’s a pressure to present a positive image in surveys. Psychologists call this the “social desirability bias”.
With that in mind, I asked another two questions. Did they believe other people crafted an overly positive social media image? And did other people feel sad when they saw these idealised images?
In this variant of the questions, people were far more likely to admit that “social airbrushing” happened. In fact, 60% claimed that their friends portrayed themselves on social media as happier than they actually were. And nearly two-thirds agreed that other people sometimes feel sad when they see their friends’ success on social media.
My belief is that these questions encouraged respondents to answer more honestly and therefore the results are more accurate.
3. Don’t ask, observe
Direct questioning is unsatisfactory because of lying and confabulation. A more accurate alternative is to observe behaviour.
This could still involve surveys. The twist is to mask the objective of the question from the participant by adopting a cell methodology. This technique involves randomly allocating your sample into different cells, or groups, and then asking each group a slight variant on the question.
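A cell split might look like this (a hypothetical sketch; the question wordings and sample size are invented):

```python
import random

def allocate_cells(respondents, n_cells, seed=42):
    """Randomly shuffle the sample, then deal it into n_cells groups."""
    pool = list(respondents)
    random.Random(seed).shuffle(pool)
    return [pool[i::n_cells] for i in range(n_cells)]

# Two variants of the same underlying question; each respondent sees
# only one, so nobody can infer the comparison being made.
variants = [
    "Do you present yourself as happier online than you really are?",
    "Do your friends present themselves as happier online than they are?",
]
cells = allocate_cells(range(200), len(variants))
for cell, question in zip(cells, variants):
    print(f"Cell of {len(cell)} asked: {question}")
```

Comparing the answers between cells then isolates the effect of the wording, without any single respondent seeing both versions.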
Even better, avoid surveys and monitor behaviour in a realistic situation. One example of this was on a New Look brief, when they were planning to launch a menswear range. The initial plans were for a modest budget to make a simple announcement.
I suspected a small campaign would be insufficient to overcome men’s reluctance to buy clothes from what was perceived as a women’s clothes shop. However, that was a hunch and we had no budget to fund a survey.
As an alternative methodology, Dylan Griffiths and I recruited half a dozen agency volunteers and photographed them twice: first holding a New Look plastic bag emblazoned with its logo, then holding a Topman bag. We uploaded the images to Badoo, a dating site where people rate the looks of other users’ photos. The pictures were left up on the site for a fortnight while we waited for them to be rated.
We found that when our volunteers were holding a New Look bag they were rated 20-25% less good-looking than when they were clutching the Topman bag. This demonstrated that the brand had a bigger job than initially suspected: it needed to make greater efforts to persuade men that it was a unisex brand.
The final approach is to use found data: the data that consumers unwittingly create as they go about their daily tasks. These data are particularly useful because they’re not muddied by the social desirability bias. People aren’t aware they’re being monitored, so they behave naturally.
Search is the most accessible found data source. Analysing search data provides insights that consumers might be loath to admit in a survey. Consider sexism. Most people would claim that they’re equally interested in their children’s intelligence, regardless of gender. However, Seth Stephens-Davidowitz, the New York Times journalist and data scientist, has analysed US search data and found that parents are two and a half times more likely to google “Is my son gifted?” than “Is my daughter gifted?” Google acts as a modern confessional in which all our darkest thoughts are captured.
However, this rich seam of data is too rarely mined by advertisers. One of my favourite freely available, but untapped, search tools is answerthepublic.co.uk. This looks at the most common search strings that include both the term you give it and a question word, such as who, what, how or when. It is a simple, quick way to understand what consumers genuinely think about your category.
For example, if you input the term ‘vitamin’ you find that consumers rarely search for vitamins by their letter symbol. Instead they search for vitamins by what they do, such as helping with muscle growth or shiny hair. That’s a useful insight for a vitamin brand as it suggests they should label and package their vitamins according to the problem they solve, not the particular vitamin they contain.
4. Observed data are not perfect
Observed data are a significant improvement on surveys. However, they are far from perfect and still need to be interpreted with caution.
Consider social media data. Brands regularly analyse their Facebook fan data to understand their customer profile. But this data does not always accurately reflect reality. An example from Stephens-Davidowitz illustrates this discrepancy. He looked at the gender of Katy Perry Facebook fans and found that they were overwhelmingly female. However, Spotify listening data revealed the gender split was more balanced: Perry was in the top ten artists for both genders. If the music label used the Facebook data to target their advertising, they’d be way out.
Does that mean the new data streams are junk and best ignored?
Not at all. Observed data is an improvement on claimed data, but it’s still flawed. To understand customers we need a balanced approach, using multiple techniques. If each technique tells us the same story then we can give it greater credence. If they jar, then we need to generate a hypothesis to explain the contradiction.
Let’s go back to the Katy Perry example. A simple explanation would be that while both genders enjoy listening to her, far more women are comfortable expressing that publicly. If a record label wants to sell Katy Perry songs or encourage streaming, then the Spotify data would be ideal. However, if it wants to promote her concerts, it would be better to use the Facebook numbers. Neither data set is right in any absolutist sense – they are right in certain circumstances.
This is an excerpt from Choice Factory: 25 behavioural biases that influence what we buy by Richard Shotton