Techniques

The time to re-evaluate Bayes?

Colin Strong

We are often so used to the tools we use that we forget they reflect and shape the way in which we understand the world.

One of our earliest tools, the map, originally did little more than reflect what humans could see in front of them, with no particular accuracy. Over time, with an understanding of the proportions of a space, including the space beyond what is immediately visible, maps became more realistic. As Nicholas Carr puts it in his excellent book The Shallows, “The more frequently and intensively people used maps, the more their minds came to understand reality in the maps’ terms.” He argues that they “advanced the evolution of abstract thinking” in man, as he was then able to “understand the unseen forces that shape his surroundings and his existence.”

Carr makes a similar case for the mechanical clock, which again changed the way we saw ourselves. The clock defined time in units of equal duration, so we were able to start comprehending the concepts of division and measurement. We began to see, in the world around us, how the whole is composed of individual pieces, which are in turn composed of pieces, and so to understand that abstract patterns lie behind the visible appearance of the material world. This mindset effectively propelled us out of the Middle Ages, into the Renaissance and then the Enlightenment.

Frequentist vs prior probability
Nate Silver, particularly in his book The Signal and the Noise, has recently been making a very persuasive case that the dominant school of statistics that we use has shaped the way in which we think about the world. He considers that “frequentist” statistics, which are almost universally used across most disciplines, including market research, imply that we are able to arrive at a definitive conclusion on the matter in question if only we are able to use our tools accurately and effectively, in other words that “uncertainty is something that is intrinsic to the experiment rather than something intrinsic to our ability to understand the world.”

In contrast to frequentist statistics, “Bayesian” statistics require us to think about the world in a connected, consequential manner, rather than in an isolated way. The Bayesian statistician is required to identify the ‘prior probability’ of a hypothesis being true. Evidence is then collected, and an estimate of how likely that evidence would be if the hypothesis were true (or false) is applied to the prior probability to derive a ‘posterior probability’. And, of course, the posterior probability can become the new prior probability when new information comes in. Silver works through a variety of examples of this process, from analysing breast cancer probabilities, to the likelihood of terrorist attacks, to the drivers of climate change.
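
The arithmetic behind this updating is compact enough to sketch in code. The Python snippet below is a minimal illustration; the figures are assumptions loosely echoing Silver’s mammography discussion rather than quotations from it:

```python
def bayes_update(prior, p_evidence_given_true, p_evidence_given_false):
    """Revise a prior probability in light of new evidence (Bayes' theorem)."""
    numerator = p_evidence_given_true * prior
    denominator = numerator + p_evidence_given_false * (1 - prior)
    return numerator / denominator

# Illustrative figures only: a prior incidence of 1.4%, a test that detects
# 75% of true cases and falsely flags 10% of healthy cases.
posterior = bayes_update(prior=0.014,
                         p_evidence_given_true=0.75,
                         p_evidence_given_false=0.10)
print(f"Posterior after one positive result: {posterior:.1%}")    # roughly 10%

# The posterior becomes the new prior if a second, independent result arrives:
posterior_2 = bayes_update(posterior, 0.75, 0.10)
print(f"Posterior after two positive results: {posterior_2:.1%}")  # roughly 44%
```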

Identifying a prior probability is often considered to be highly subjective, yet this is typically what happens in social science – we consider the context of our findings to avoid giving them undue weight. To take a simple example, consumers often over-estimate their propensity to purchase a new product or service. Bayes forces researchers to look at the broader market for new products and services and to generate a prior probability based on that success rate.
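
To make the new-product example concrete, here is a hedged sketch using the same Bayes arithmetic; the base success rate and the survey likelihoods are invented for illustration, not real market figures:

```python
# Invented figures: assume roughly 15% of new products in a category succeed,
# that 75% of eventual successes show strong stated purchase intent in research,
# and that 40% of eventual failures do too (stated intent flatters everything).
prior = 0.15
p_intent_if_success = 0.75
p_intent_if_failure = 0.40

numerator = p_intent_if_success * prior
posterior = numerator / (numerator + p_intent_if_failure * (1 - prior))
print(f"P(success | strong stated intent) = {posterior:.0%}")  # about 25%
```

Strong stated intent lifts the odds, but the base rate keeps the estimate honest.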

Silver points out that the lack of explicit focus on the broader context can mean that we generate false positives – studies that have statistically significant findings but which are manifestly wrong, such as one purporting to show how toads can predict earthquakes. Other influential voices also point out the ways in which the intentions of the researcher and their effects on experimental design render studies based on frequentist statistics particularly vulnerable to false positives.

Big Data
To date, market research has been able to manage these challenges through the intelligent application of frequentist statistics. Of course we look at the context before deciding which analysis to run; of course we apply market knowledge to weed out false positives. As market research moves into the sphere of big data, however, these safeguards come under serious strain.

Much of the speculation around big data has effectively dismissed the need for an understanding of context. Indeed, Chris Anderson famously championed the perspective that we no longer need to really understand consumers or hold theories of human behaviour – we can simply use large computers to uncover the important patterns and trends. Anderson suggests that some sciences have drifted into “arid, speculative theorising,” with the implication that in the meantime big data is breaking new ground.

The heritage of commercial big data is firmly rooted in highly numeric disciplines, including statistics, computer science, applied mathematics and economics – all very useful, but not necessarily areas in which analysts will have the experience and understanding of the market context needed to estimate prior probabilities.

This potential lack of contextual understanding is compounded by the sheer volume of big data, which points to the need for machine-based analytics systems. These further remove the analyst from integrating a contextual view of the data, making a high number of false positives likely. The sheer volume of analysis being run means that even the most intelligent frequentist statistician will struggle to take context into account, which seems to call for machine-based Bayesian approaches in which contextual information is integrated into the analysis.
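
A toy simulation makes the scale of the problem vivid. In the sketch below, all parameters are assumptions chosen for illustration: we run many significance tests on pure noise, and at the conventional 5% level roughly one “discovery” in twenty is spurious even though no real effect exists anywhere:

```python
import random

random.seed(42)

N_TESTS = 1000      # number of hypotheses mined from the data (assumption)
N_PER_GROUP = 50    # observations per group in each test (assumption)

def two_group_noise_test():
    """Compare two samples of pure noise; return True if 'significant' at ~5%."""
    a = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    b = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    mean_diff = abs(sum(a) / N_PER_GROUP - sum(b) / N_PER_GROUP)
    se = (2 / N_PER_GROUP) ** 0.5   # std error of the difference of two means
    return mean_diff / se > 1.96    # ~5% false positive rate under the null

false_positives = sum(two_group_noise_test() for _ in range(N_TESTS))
print(f"{false_positives} 'significant' findings from {N_TESTS} tests on pure noise")
# Expect roughly 50: significance alone cannot tell us which findings are real.
```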

Duncan Watts suggests that perhaps the best we can do when attempting to make predictions within complex systems is to model the probability of particular outcomes – something a Bayesian approach is well suited to deliver. This gives the recipient a much better understanding of how robust the prediction is, and as such gives guidance on the appropriate business response.

Rain, rain, go away!
However, reporting predictions in terms of probabilities also has its issues, as the study of weather forecasting – a complex system if ever there was one – illustrates. Many studies have found that we are poor at dealing with probabilistic guidance: we don’t want to know if there is a 40% chance of rain tomorrow, we want to know definitively whether it will rain tomorrow so we can take appropriate action. The same principles apply to business decisions – a product manager will generally be underwhelmed by guidance saying there is a 50% chance of generating sales of, say, $50m. The language traditionally demanded by the business case is far more definitive than that. Yet perhaps we need to work at communicating better in this way, to avoid the inevitable overconfidence that results when providing predictions of absolute numbers within complex systems.

Given there is so much variance in the data, in many situations a probabilistic prediction is perhaps as much as we can offer. So whilst we can predict with great confidence the way in which the aerodynamics of an aircraft wing will work, trying to predict the sales volumes of a new product involves much more variance that is hard to measure, and as such a probabilistic approach may well be more appropriate.
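
One way to deliver such a probabilistic prediction is to simulate it. The sketch below is a minimal Monte Carlo illustration in which every parameter is an invented assumption for a hypothetical launch, not a real forecast; it reports a probability and an interval rather than a single number:

```python
import random

random.seed(7)

N_SIMS = 100_000

def simulate_revenue():
    """Draw one plausible revenue outcome from uncertain inputs (all assumed)."""
    market_size = random.gauss(10_000_000, 2_000_000)  # category units sold
    share = random.betavariate(2, 8)                   # our share: mean 20%, wide spread
    price = random.gauss(30.0, 5.0)                    # dollars per unit
    return max(market_size, 0) * share * max(price, 0)

outcomes = sorted(simulate_revenue() for _ in range(N_SIMS))

p_over_50m = sum(r > 50_000_000 for r in outcomes) / N_SIMS
median = outcomes[N_SIMS // 2]
p10, p90 = outcomes[N_SIMS // 10], outcomes[9 * N_SIMS // 10]

print(f"P(revenue > $50m) = {p_over_50m:.0%}")
print(f"Median ${median/1e6:.0f}m, 80% interval ${p10/1e6:.0f}m-${p90/1e6:.0f}m")
```

Reported this way, the forecast carries its own health warning: the business sees not just the headline number but how wide the plausible range around it is.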

What are the chances?
Debates are currently raging about the relative merits of each approach, and pragmatists among us argue that there is no difference between the sensible Bayesian and the sensible frequentist. However, Bayesian approaches certainly chime with the complex, nuanced world we find ourselves in, and a tool that helps us to think in a more probabilistic manner, with a strong focus on the wider context, may well be better suited to our times.

Colin Strong is managing director at GfK NOP Business & Technology in the UK
