In the September issue of Research World an article was published by Lawrence Spero on ‘System 1’ decision making and how online surveys can utilise this instinctive response for better data. It caused a stir with some of our readers. We’ve now published the article online to provide those readers with a space to discuss the piece.
Lawrence Spero
“Is George Clooney handsome?” “Which of the following video games do you intend on buying in the next month?” “Is Beyoncé gorgeous?”
The very best answers to these types of questions come as a result of instant mental reflexes without delayed ‘System 2’ contemplation (Kahneman, 2011). This is a matter of scientific fact based on the biochemistry of the brain.
There is much in life that is automatic, like breathing. Making rapid-fire decisions can be automatic when responding to questions that have instant ‘tug,’ or emotional appeal. Such reflexes of the mind occur all the time and control many actions that people pursue.
There are many daily questions that require little or no thought, drawing on what the Nobel laureate Daniel Kahneman calls ‘System 1’ thinking (Kahneman, 2011). The best answers to questions of the sort described above can deliver insights of tremendous value to global executives and market researchers – if those answers take seriously the biochemistry of dopamine activity.
Unfortunately, the science of dopamine response – which implies that the brain’s focus on survey questions is limited in duration and focus – is missing in the majority of online market research today.
A ‘nano-survey’ is a very short survey (3-7 questions) that draws on how the brain responds to the type of ‘tug’ questions referred to earlier: “Are you hungry?” “Do you have enough food for the week?” “Are you short of money each month to buy food?” These tug questions are readily answered anywhere across the web-enabled world – especially if they are displayed on a full-page white screen (a full-screen URL, not a pop-up); they provide an instant snapshot of issues immediately relevant in the locale of the respondent. The approach can apply ‘low latency’ (ie, fast exposure) to ensure that anyone in the world with access to a web-enabled device has a relatively equal chance of being exposed to the nano-survey. The ‘unintentional’ or abrupt nature of nano-surveys, plus the brevity of nano-survey questions, yields more reflexive ‘System 1’ responses.
Perhaps the most fascinating aspect – and the one that differentiates this approach from traditional online market research using stratified panel respondents – is that the method taps into the real-time relative relevancy of these questions to the human brain. If a higher fraction of everyday internet users in, say, the United Kingdom expresses greater interest in a nano-survey about perceived security concerns resulting from dramatic ‘long-tail’ events (ie, the response rate is significantly higher following a serious flu outbreak or a terrorist strike) one week vs. the next, then public health or security personnel should pay heed. Relevancy becomes powerful when you have established norms for response rates and you see a significant change; this is an indicator of passion in the subject. So, if video game option B is the preferred choice in an A/B test, and you can add to the insight that response rates among males jumped far above the male norm, then you are adding a ‘relevancy of topic’ score to your data (ie, more males were passionate about the topic in the given sampling time frame).
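To make the ‘relevancy of topic’ score concrete, a shift in response rate can be checked against an established norm with a standard two-proportion comparison. The sketch below (in Python) is purely illustrative and is not RIWI’s scoring method; the function name, the normal-approximation z-test and the example figures are assumptions introduced here for clarity.

import math

def relevancy_score(norm_rate, norm_n, observed_rate, observed_n):
    """Compare an observed response rate against an established norm using a
    two-proportion z-test (normal approximation). Large positive z-scores
    suggest the topic was unusually relevant in the sampling time frame;
    values near zero suggest business as usual."""
    # Pooled proportion under the null hypothesis of "no change vs. the norm"
    pooled = (norm_rate * norm_n + observed_rate * observed_n) / (norm_n + observed_n)
    standard_error = math.sqrt(pooled * (1 - pooled) * (1 / norm_n + 1 / observed_n))
    return (observed_rate - norm_rate) / standard_error

# Hypothetical example: the male response rate to a video-game nano-survey
# jumps from a norm of 3% (10,000 exposures) to 5% (8,000 exposures).
z = relevancy_score(norm_rate=0.03, norm_n=10_000,
                    observed_rate=0.05, observed_n=8_000)
print(f"z = {z:.2f}")  # roughly 6.9, far beyond the conventional 1.96 threshold

A jump of this size relative to the norm is the kind of change the author describes as an indicator of passion in the subject.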
If this vision can be achieved, respondents answer based on the well-documented theory of leverage salience (Groves et al., 2000). The more people feel interested in or connected to a topic, the more likely they are to engage. This is, in my view, the purest reason for engaging with or answering any kind of survey, since respondents are motivated not by incentives but by the issue itself. The power and data purity of this methodology online rise dramatically if, unlike with polls on news-content sites such as CNN, respondents cannot bias the results by mounting e-mail or social media campaigns that direct friends to the polling site, because they will not know which website to direct them to. That is, in an ideal formulation, the websites upon which the surveys sit – reached after the potential respondent types a non-trademarked ‘nonsense’ domain into the URL (direct navigation) bar of any web-enabled device – are always changing. Under this approach, the respondent base is purely randomised and not arbitrarily split by age, sex or other researcher-defined cells.
My own experience in educating doctors supports the view that the short, reflexive answer is often better than longer, delayed contemplation. For example, I once had 200 medical students rapidly answer all the questions on an exam, collected the answers, and then had the students spend as much time as they wanted on the same exam questions, revising their answers accordingly. All the students did better the first time. This concept also resonates with writers labouring over book publication who slowly learn to ‘trust first drafts.’
Are the short, snappy answers to these nano-surveys meaningful? Yes, because the brain reflexively comes up with the right answer. Reflex answers – what the writer Malcolm Gladwell (2005) has popularised as the ‘blink’ – can be more informative than long, even contemplative, responses where contradictory thoughts inevitably intervene. The brain operates on reflexes. It is largely a conglomerate of reflex neural assemblies that work rapidly and automatically, but are slowly influenced by thoughts and events. For example, the ability to walk depends on many nerve-muscle factors but also on the amount of brain dopamine, which is lost in Parkinson’s disease, in which patients walk and think slowly (Hornykiewicz, 2008).
Just as important, people are impatient in the present era of the internet. Answering a survey via a banner ad has become increasingly rare and may disappear altogether, if for no other reason than ‘banner blindness,’ the tendency of the eye to ignore the tops or sides of web pages where survey icons may sit. A recent study by Infolinks found that 86% of web users today experience ‘banner blindness’ (Adotas, 2013). A web page that takes more than two seconds to render on the screen is typically skipped (Heitzman, 2011). (This is, naturally, affected by the nature of the internet connection and activity, which is steadily improving around the world.) Speed, or low latency, is the ‘new normal’ for capturing the best data from online questions, answers and information flow. Although a typical reaction time for most people is about one-fifth of a second (200 milliseconds), neuron-to-neuron transmission in the brain is extremely fast, with only half a millisecond of delay between neurons (Katz and Miledi, 1965; Soviet Encyclopedia, 1970-1979). In the case of a startle reflex, the time taken for a sound to travel from the ear to the brain is about 10 milliseconds (Davis et al., 1982). The blink reflex, from the instant something touches the cornea of the eye to the moment of the blink, takes about 100 milliseconds.
Today, low latency is critical in computing network efficiency – and in survey response – particularly where real time (ie, nearly instantaneous) response is required for clients in macro-finance, global corporations or international security. Hence the enormous potential of nano-survey responses from every web-enabled region of the world.
Lawrence Spero is professor of pharmacology (retired) from the University of Toronto and former director of both Educational Computing and the Bell University Health Communications Lab. He is an unpaid advisor to the RIWI Corporation.
2 comments
Mr. Poynter may be surprised to learn that I agree with most of his points and am thrilled he raised them. Yet in some cases his points require nuance. Let me preface my response by noting that the groundbreaking work of Kahneman and Tversky is about cognitive heuristics and not, as he rightly observes, about an artificial distinction between ‘System 1’ and ‘System 2’ thinking.
The latter divide, as Prof. Daniel Kahneman points out in every public lecture, and in his recent book – a summation of, and tribute to, the genius of his dear friend and collaborator, the late Amos Tversky – is simply a way to help readers navigate the complex work on cognitive heuristics for which Tversky was celebrated. An example of such a heuristic is availability bias. Another is what is often referred to as ‘herd bias’, common in finance, which is the tendency to think as everyone else thinks. What is important is to be as aware as possible of one’s own biases.
Having said this, I will briefly address each of Mr. Poynter’s important observations. Which questions constitute System 1 vs. System 2 is subjective; there has been, to the best of my knowledge, no case-controlled randomized research on this subject. RIWI’s clients define this, since they know more about their own clients’ insights on this matter than we do. As such, Mr. Poynter’s difference of opinion over what constitutes a System 1 vs. System 2 decision is, I would suggest, just that: a difference of opinion. The Internet-based nature of our data is well documented in my own peer-reviewed research; it is also described on our website under the “methodology” tab. Although he is correct that this could well have been acknowledged in the article, one practical challenge is that it would take a longer article to dig into the details: the overwhelming bias of the Internet is toward English, along with a skew toward the younger demographic, notably the 18-34 year old population. These skews can be redressed to the best of our ability, and we do so with our clients.
With respect to access to norms, in some cases we offer these at the client’s request; in others, we are building them (subject to limitations which we know and readily admit); and in others, they will be made public, as we are shortly going to do in conjunction with Greenbook in a major public release covering hundreds of thousands of survey-takers around the world, releasing, for example, norms on Internet browser usage, operating system usage and aspects of the survey-taking experience in over 200 countries and regions. Our respondent data norms (for example, data captured in conjunction with the World Bank in Indonesia and Kenya on the public’s desire for open financial data from government) have been shared with the world and are readily accessible on our website.
I will not comment too much on Kahneman’s nuanced view of the populist writer Malcolm Gladwell; in my own experience, having heard Prof. Kahneman respond to a question on this subject, he was, in fact, laudatory of the writer. Yet with this point, too, I agree: Gladwell’s popularization of research does, at times, tend to gloss over the nuance of what is, and is not, intuition. Most important, thanks to Mr. Poynter for acknowledging RIWI as being in the data innovation business. We are a data company; we are considered authorities on rarefied parts of the Internet and on Internet software, and our objective is to provide our clients and research partners with a new and unique source of global respondents to answer the questions they deem important. Amos Tversky RIP.
– Neil Seeman, Founder and CEO, The RIWI Corporation
Whilst I find the RIWI system for nano surveys very interesting, I was somewhat taken aback by this article. Partly because it seemed such a naked sales pitch for RIWI and their innovative business model, but mostly because the article seems full, IMHO, of mistakes, over-simplifications, and a general lack of understanding about the market research process.
Here are a few of my key issues.
1. At the top of the article it describes “Which of the following video games do you intend on buying in the next month?” as a system 1 decision, i.e. reflexive. However, as anybody who has researched video games knows, this is a field where people tend to read the reviews and the boxes, attempting to decide whether the cost of buying the game will be balanced by enough hours of playing. Kahneman describes buying a white good as a system 2 activity; by that reckoning, buying a video game is going to be system 2 for many people.
2. The formulation “…buying in the next month” is also very odd for an attempt to get an instant, reflex, or system 1 response. Asking people to envisage the future is not something that can readily be done with a blunt survey question. This problem occurs in other suggested questions, such as “Are you short of money each month to buy food?” – indeed, as well as requiring reflection, this question is flawed in that it is double-barrelled: ‘no’ might mean I have plenty of money, or that I sometimes don’t have enough money for food, or that I have enough money for food but can’t pay my health bills.
3. The article describes system 1 and system 2 as scientific fact based on the biochemistry of the brain. System 1 and System 2 are metaphors; the people who developed them have changed their minds, and Kahneman says at the end of his book: “This book has described the workings of the mind as an uneasy interaction between two fictitious characters: the automatic System 1 and the effortful System 2. You are now quite familiar with the personalities of the two systems and able to anticipate how they might respond in different situations. And of course you also remember that the two systems do not really exist in the brain or anywhere else.”
4. The author says that with RIWI all internet users have a relatively equal chance of being asked the survey. As I understand it, the sample will be skewed to people who use the internet more, amongst them to people who type the URL, and amongst those to people who make more mistakes. This skew may not matter, but it should at least be noted.
5. The article suggests that the response rate for a survey will increase and decrease as public interest increases and decreases, talking about relevancy. This may be true, but is it a good thing? How will the norms the author talks about be created, and who will have access to them?
6. The author may well be correct when he recalls each one of his 200 students, in one course, on one occasion, doing better on their first, quick sitting of his paper than on their second, more considered sitting. But what is the relevance to completing a market research survey? The process of sitting a medical exam is different from that of completing a survey.
7. The author gives the impression that Gladwell’s book Blink stresses the benefits of quick answers. It does not; it shows how, in some situations, usually for experts, thin slicing works in surprising ways, but it also shows that first responses are very often wrong. Kahneman is critical of Gladwell’s Blink proposition, writing “Malcolm Gladwell definitely created in the public arenas the impression that intuition is magical… That belief is false.”
I do not want this comment to come across as negative about innovation, or about RIWI. We need innovation, and I think RIWI and nano surveys are well worth investigating. But in times of change we need to protect the basis on which market research makes its claims. I think this article is weak, and I am surprised that ESOMAR should have published it in the magazine. However, I do thank ESOMAR for posting it in a location where others can assess it and add their comments. I look forward to hearing what others have to say.