By Kevin Gray and Koen Pauwels
“In spite of our strong marketing support, sales of our brand are flagging. Why? What should we do?”
“If we launch this new product, what will it do to our bottom line? Will we just cannibalize our flagship brand?”
These are just two examples of questions marketers around the world ask themselves every day.
Unfortunately, there are rarely simple answers, and organizational politics and other factors, such as the state of the economy, also come into play, further complicating matters. While some marketing researchers seem to take for granted that marketing is now well-embedded in most companies and that the value of marketing research is universally accepted by marketers themselves, even in Western multinationals these assumptions are tenuous. In the words of one of our contacts, a marketer with extensive brand management experience at MNCs, “Marketing is regarded as fluff” even at many large corporations. The perception that the real work is done by production, sales and engineering is very common.
On the whole, managers, marketers included, seem unprepared to fully leverage either data or analytics in decision-making.1 Many decisions continue to be made based on gut instinct and internal politics, even when sophisticated analytics and Big Data are part of the decision-making process. Though not wishing to resurrect Taylorism2, we feel decisions can be made more scientifically and more effectively through the appropriate use of data and analytics and, more fundamentally, by thinking like a scientist.
Thinking like a scientist isn’t just matrix algebra and programming – these are important tools for some participants in the decision-making process but are means and not ends. Thinking like a scientist is a way of looking at the world that helps us tie disparate data and information together to make better decisions in a timely fashion. One need not have elaborate statistical skills in order to think scientifically – most scientists have actually had minimal academic coursework in statistics.3
The first steps are to examine our assumptions and, in a nutshell, to do our homework. Here are a few basic questions we would encourage decision makers to ask themselves:
- What decisions do we really have to make? Why do we think these are the decisions we must make?
- How much of what we “know” about our product category is actually mere guesswork? What do we really know about the competition? Is the competition really the competition? Perhaps our definitions of our category (and core consumers) are too narrow.
- Is it time to revisit our SWOT analyses (Strengths/Weaknesses/Opportunities/Threats)?
- What relevant data do we have and how reliable is it? What data can we obtain that might fill in important blanks?
- When do we really have to make our decision? A decision made too slowly is effectively a bad decision, but a hasty decision is not necessarily a good one either.
Thinking like a scientist can help us better judge whether a decision will have the desired consequences and can also bring to light choices that we had not considered.
Dashboards under assorted names are now a dime a dozen, but the utility of many of them is uncertain. KPIs are religiously tracked, but many may have no empirical relationship with the bottom line. They are assumed to be connected with sales, market share and profitability, for example, but this assumption might never have been rigorously tested, and some KPIs may only be legacy items with no real business meaning. Chapter 8 of ‘It’s not the size of the data – it’s how you use it’ explains how to connect KPIs to your bottom line, and why most so-called KPIs should be dropped: they are not leading indicators of hard performance.
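A first, very simple test of whether a KPI is a leading indicator can be run in a few lines. The Python sketch below assumes a hypothetical monthly data file with “sales” and “brand_liking” columns (both names are our invention); it asks whether last month’s KPI explains this month’s sales once last month’s sales are accounted for.

```python
# Minimal leading-indicator check for a KPI (hypothetical data and names).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("monthly_metrics.csv")       # hypothetical monthly data file
df["kpi_lag1"] = df["brand_liking"].shift(1)  # the KPI one month earlier
df["sales_lag1"] = df["sales"].shift(1)       # control for sales momentum

model = smf.ols("sales ~ kpi_lag1 + sales_lag1", data=df.dropna()).fit()
print(model.summary())  # an insignificant kpi_lag1 casts doubt on the KPI
```

If the lagged KPI adds nothing beyond past sales, its claim to predict hard performance is weak, whatever the dashboard says.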
There are many traps that managers can easily fall prey to when trying to unravel the mysteries of the marketplace:
- Presumed causes may, in fact, be effects. For instance, we may observe a huge spike in our paid search clicks together with a spike in online sales and fully attribute the sales increase to the success of our paid search. However, these customers may have already decided to buy from us thanks to other incentives, and simply used search as a lazy way to get to our site. We usually do not have experimental evidence on which to base our decisions, and even experiments are never 100% conclusive.
- There may be important variables we haven’t considered. According to London Business School professor Tim Ambler, there should be a KPI for every likely cause of success or failure. Moreover, it is important to cover the main stakeholders.
- We frequently cross-tab or plot variables two at a time, but this does not account for factors that might mediate or moderate their relationship, which may really be weaker or stronger than the tabulation or graph suggests. The classic, and macabre, example is that psychiatrist visits appear to increase people’s suicide risk; the relationship holds up, but switches from positive to negative once a third variable, depression, is accounted for (see the first sketch after this list).
- In time-series data there are often lagged relationships among variables, and one that might seem irrelevant may actually have a long-term impact on sales or some other key measure. This effect could be large or small, beneficial or harmful. For instance, increases in brand consideration and liking often lead to long-term brand benefits, even after the competition reacts (see the second sketch after this list).
- There may be a genuine relationship between a marketing input and sales but we may not spot it because the relationship between the two is non-linear or obscured by other variables that we have not measured or modelled.
- We may be confusing a fluke with a trend. The more we seek, the more we will find. We should look at the overall patterns in the data and not just focus on one or two variables.
- Last but not least, be wary of confirmation bias – it’s quite natural to search for, interpret or recall information in a way that confirms our beliefs!
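To make the mediation point concrete, here is a minimal simulation of the sign flip described above. All variable names and numbers are invented purely for illustration: a hidden confounder (“severity”) drives both the presumed cause (“visits”) and the outcome (“risk”).

```python
# Hypothetical illustration of a confounder flipping a coefficient's sign.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
severity = rng.normal(size=n)                 # unobserved condition severity
visits = 2.0 * severity + rng.normal(size=n)  # worse condition -> more visits
risk = 3.0 * severity - 1.0 * visits + rng.normal(size=n)  # visits actually help

naive = sm.OLS(risk, sm.add_constant(visits)).fit()
adjusted = sm.OLS(risk, sm.add_constant(np.column_stack([visits, severity]))).fit()
print(naive.params[1])     # positive: visits appear to raise risk
print(adjusted.params[1])  # about -1: visits lower risk once severity is held fixed
```

The naive two-variable view gets the sign wrong; only conditioning on the third variable recovers the true relationship.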
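For the lagged-relationships point, a simple distributed-lag regression is one way to see how far an input’s effect reaches. Again this is only a sketch; the data file and the “sales” and “adspend” column names are hypothetical placeholders.

```python
# Sketch: estimate the lag profile of advertising's effect on sales.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("weekly_metrics.csv")  # hypothetical weekly data file
for lag in range(1, 5):                 # add four weeks of lagged ad spend
    df[f"adspend_lag{lag}"] = df["adspend"].shift(lag)

formula = "sales ~ adspend + " + " + ".join(f"adspend_lag{k}" for k in range(1, 5))
fit = smf.ols(formula, data=df.dropna()).fit()
print(fit.params)  # sizeable coefficients at later lags signal delayed impact
```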
Unless the variables that are truly relevant are statistically identified and tied together, dashboards and other decision support tools may be misleading or at best a waste of money. In extreme cases, we would be better off tracking random numbers generated by a spreadsheet; this certainly would be faster and incur little cost!4 (It is important to recognize, though, that statistical models are simplified representations of reality, not actual reality, and that math can never entirely replace the gut in management decisions.)
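The random-numbers remark, and the fluke-versus-trend trap above, can be demonstrated in a few lines: generate a hundred meaningless “KPIs” and a handful will correlate “significantly” with sales purely by chance. This is a simulation for illustration only.

```python
# Simulated demonstration: random series that "predict" sales by chance.
import numpy as np

rng = np.random.default_rng(42)
sales = rng.normal(size=52)             # one year of weekly "sales"
fake_kpis = rng.normal(size=(100, 52))  # 100 meaningless random series

corrs = np.array([np.corrcoef(kpi, sales)[0, 1] for kpi in fake_kpis])
# |r| > 0.27 is roughly p < .05 for n = 52, so about 5 false "hits" are expected
print((np.abs(corrs) > 0.27).sum())
```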
Humans are strongly inclined to think dichotomously (e.g., something is either good or bad) even though thinking in terms of conditional probabilities is usually a better reflection of the way the world works.5 We should also be frank and admit that data and analytics are often used to support decisions that have already been made, and that we especially love numbers that match our view of the world! Furthermore, it’s often quite easy to put forth a seemingly good “explanation” of why something has happened after the fact, but being able to actually predict future events is another matter altogether.
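The reframing in note 5 can be made concrete with a toy Bayes calculation. All the inputs here are invented: an assumed base rate of launch success and an assumed concept-test accuracy yield the likelihood of success given a strong test result.

```python
# Toy conditional-probability ("if we assume A, B and C, how likely is D?") example.
base_rate = 0.20               # assumed P(launch succeeds)
p_strong_given_success = 0.80  # assumed P(strong concept test | success)
p_strong_given_failure = 0.30  # assumed P(strong concept test | failure)

p_strong = (p_strong_given_success * base_rate
            + p_strong_given_failure * (1 - base_rate))
p_success_given_strong = p_strong_given_success * base_rate / p_strong
print(f"P(success | strong test) = {p_success_given_strong:.2f}")  # 0.40
```

A strong test result doubles the odds here, yet failure is still more likely than success; that is a very different answer from a yes/no “will this work?”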
In summary…
Data and analytics have been hyped to the point where many of us are getting sick and tired of hearing about them, and there is also a lot of disagreement about what they mean. In reality, we feel they are still greatly underutilized by managers. This is unfortunate, since many organizations now have more data and better analytic tools than ever to enhance decision making. However, we should stress that it’s the thought process that matters most and, by following some of the guidelines we’ve outlined, managers can make better decisions even with limited data and mathematical tools. It’s truly the thought that counts.
_____________________________________________________________________________________
Notes
1 In the past, many marketing research agencies were mainly field-and-tab companies, often with an Operations department headed by a “Programmer/Statistician” in charge of fieldwork and data tabulations. Perhaps because of this, marketing researchers to this day often think of analytics as cross tabs or programming. Analytics is also often conflated with ‘Big Data’.
2 https://en.wikipedia.org/wiki/Scientific_management
3 See Statistics Done Wrong (Reinhart), for example.
4 Structural Equation Modelling and Time Series Analysis, while highly technical, offer very useful conceptual frameworks for thinking about these issues. The Halo Effect (Rosenzweig), The Improbability Principle (Hand), Risk Assessment and Decision Analysis (Fenton and Neil) and It’s Not The Size Of The Data – It’s How You Use It (Pauwels) are four books that also address these concerns.
5 Instead of “Will this work?”, for instance, “If we assume A, B and C, what is the likelihood of D?” may be the more useful question in many circumstances.
Kevin Gray is president of Cannon Gray, a marketing science and analytics consultancy. Koen Pauwels is professor of marketing at Ozyegin University, Istanbul.