By Kevin Gray
Many marketing researchers will confess that our industry has never been very adept at marketing itself, ironic though this may be. Another matter less talked about concerns the second word in our industry title – research. Marketing research, including newer methods, can be quite slipshod.
“But it doesn’t have to be perfect! XX% is good enough” is a protest I’ve heard when the quality of MR is called into question. Due to budget and time constraints, researchers in many fields must sometimes accept a smaller sample and wider confidence intervals than they would have preferred. In marketing research, however, this objection is often nebulous. What does “XX%” mean? If we ask the wrong people the wrong questions, is that good enough? If we misinterpret Daniel Kahneman in ways that cause our client to make the wrong decision, is that good enough? If our statistical model suggests to our client that they need to fix A, B and C when the real problems are D and E, is that good enough?
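To make that trade-off concrete, here is a minimal sketch in Python (the 25% result and the sample sizes are illustrative, not figures from any real study) of how the margin of error for a proportion behaves under textbook simple random sampling:

```python
import math

def ci_half_width(p, n, z=1.96):
    """Approximate 95% CI half-width for a proportion under simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative only: a 25% result measured with two hypothetical sample sizes.
for n in (200, 1000):
    print(f"n={n}: 25% +/- {ci_half_width(0.25, n):.1%}")
```

Because precision improves only with the square root of n, halving the interval requires roughly quadrupling the sample, which is why budget pressure collides with precision so quickly. But note that this arithmetic says nothing about whether we asked the right people the right questions in the first place.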
After all these decades, how could marketing research have a quality problem? There is one answer that’s deceptively simple – research is hard. Let me backtrack for a moment. Nearly all the research methods we use in marketing research have been adapted from other fields, and marketing scientists are continually looking outside MR for new ideas. It’s part of our job. Though I’d prefer to spend more time with John le Carré, I read books and articles on healthcare, psychology, economics and various other subjects, in addition to statistics and computer science. I’m also in frequent touch with academics in several fields.
From all this it’s quite clear that research is hard to conduct competently. Theories can be difficult to formulate and test empirically. Even well-conducted research may fail to replicate, and I explain some of the reasons why in Meta-analysis and Marketing Research. Authorities on scientific topics frequently disagree and disputes can become quite rancorous. For example, it turns out that the dinosaurs might not have been killed off by an asteroid after all – see this article if you’re interested in competing theories.
Statistics – the grammar of science, in Karl Pearson’s words – is not as clear-cut as many may believe. Scientists also make statistical mistakes, and I’ve summarized some of them here. Statistics is a very big field and getting bigger all the time. There are now countless ways to analyze the same data, and statisticians’ choices can have important consequences for decision makers. Some established practices, such as significance testing, are also under fire. Moreover, there are usually several ways to interpret the same findings.
Data is another issue. Big data can make matters harder, not easier, for researchers. Generally speaking, the bigger the data, the more likely it contains errors and the harder those errors are to find and fix. The sheer size of the data also narrows our analytic options. What about AI? Amidst the hype, it’s easy to forget that the brains of AI are computer programs created by fallible human beings who must make judgments about what to program and how to program it. And the more complex the AI, the greater the risk of bugs that go undetected. AI is not a silver bullet. Big data, machine learning and AI may make matters worse if they encourage us to hide behind technology.
Non-probability samples are typical in consumer surveys, though we often erroneously analyze the data as if it were a simple random sample from a known population. Fieldwork quality control can be erratic as well, and it’s not unusual for statisticians to spend considerable time cleaning data that’s supposedly clean.
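As a rough illustration of what that mis-specification hides, survey statisticians often summarize departures from simple random sampling with a design effect (deff). The sketch below uses a hypothetical deff of 2.0, a value chosen purely for illustration, to show how a naive SRS margin of error understates the real uncertainty:

```python
import math

def margin_of_error(p, n, deff=1.0, z=1.96):
    """95% margin of error for a proportion; deff > 1 inflates it for non-SRS designs."""
    return z * math.sqrt(deff * p * (1 - p) / n)

n, p = 1000, 0.25  # hypothetical sample size and observed share
print(f"Assuming SRS:    +/- {margin_of_error(p, n):.1%}")
print(f"With deff = 2.0: +/- {margin_of_error(p, n, deff=2.0):.1%}")
```

A design effect of 2.0 means the sample carries roughly the information of a simple random sample half its size; the true value depends on the panel, the quotas and the weighting, and ignoring it is exactly the error described above.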
Moreover, survey research is more difficult than many may realize, and even well-designed questionnaires cannot entirely eliminate error. Survey research results are easily misinterpreted, too. For instance, in a product test, if 15% say they would definitely buy Product A and 25% Product B, this only suggests that the second test product is probably the better candidate, not that 25% of consumers will buy it if launched.
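A quick sanity check shows how little that 15% versus 25% result settles on its own. With hypothetical cell sizes (the example above gives none), an approximate 95% confidence interval for the difference is wide, and with smaller cells it does not even rule out zero:

```python
import math

def diff_ci(p1, n1, p2, n2, z=1.96):
    """Approximate 95% CI for the difference between two independent proportions."""
    diff = p2 - p1
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff - z * se, diff + z * se

# Hypothetical cell sizes for the 15% (A) vs. 25% (B) definite-buy example.
for n in (100, 150):
    lo, hi = diff_ci(0.15, n, 0.25, n)
    print(f"n={n} per cell: B minus A lies in ({lo:+.1%}, {hi:+.1%})")
```

And even a decisive gap would only tell us that B likely beats A among the people we asked; stated purchase intent is not a forecast of market share.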
I’ve just scratched the surface, and my focus has been on quantitative research. Qualitative specialists could surely add to my comments. What to do? I think the first step is to admit that we have a problem and that research is hard. Organizations such as ESOMAR and the Market Research Society offer seminars on research methods, and many research agencies and clients provide or subsidize formal training for their employees. The Quant Essentials series I published in RW Connect was a small contribution on my part.
We must do more, though. Let’s raise the bar.
Kevin Gray is President of Cannon Gray, a marketing science and analytics consultancy. He also co-hosts the audio podcast series MR Realities.
1 comment
I agree, Kevin. I would add that ‘hard’ research does not necessarily equate to ‘slow’ research, though the people who want research faster and faster seem to assume that research is easy. The key, I think, is forethought. Like you, I read extensively, and therefore keep building up knowledge that I will be able to apply later. The other key is challenging the status quo. Just because everyone else is using a particular method doesn’t mean you have to.