
The Reductionist Agenda of Big Data

Colin Strong

There is something beguiling about apparently simple solutions that makes them very hard to turn down. Behavioural economics has shown us time and again that we hate uncertainty and strive to reduce it by quickly adopting explanations. Perhaps Evgeny Morozov is arguing something not dissimilar when he says technology is increasingly encouraging ‘solutionism’, which he describes as:

Recasting all complex social situations either as neatly defined problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimized – if only the right algorithms are in place!

He cites urban design academic Michael Dobbins, who has criticised ambitious building programmes as reaching “for the answer before the questions have been fully asked.” Much of Morozov’s ire is aimed at the way this thinking is applied in government and the public sector, where he sees ‘sexy, monumental, and narrow-minded solutions’ being applied to ‘problems that are extremely complex, fluid, and contentious’.

Morozov’s brilliant polemic is essential reading for anyone interested in exploring the way in which technological explanations are in danger of becoming the lens through which we see the world. But surely this should bring those of us embracing the opportunities of Big Data up short. Are we in danger of a form of ‘solutionism’ ourselves, looking for simple answers in the face of the complexities of human consumer behaviour?

The reductionist agenda of big data
Maybe we need to step back for a moment and look at what much of the current practice of big data analytics actually involves. It is perhaps somewhat concerning that so often it falls to the much-vaunted data scientist to piece together the jigsaw and deliver predictive analytics around consumer behaviour. Concerning because, much of the time, this seems to involve applying ‘narrow-minded solutions’ to ‘problems that are extremely complex’.

The big issue here is reductionism. Since when did we think that an analysis of observed behaviour alone was an adequate way of understanding human beings? Social scientists found such approaches wanting when Behaviourism was abandoned in favour of emerging cognitive and social explanations as far back as the middle of the twentieth century.

But, the data scientists may argue, do we actually need to understand consumer behaviour? If we know what consumers do, in a predictable way, do we actually need to understand the reasons behind it? As Mayer-Schönberger and Cukier put it, ‘Knowing why might be pleasant, but it’s unimportant for stimulating sales’ – knowing what is ‘good enough’.

But behind this enticing perspective sits an awful lot of complexity that is perhaps not always fully examined, not least around the value of correlations and the need for contextually based interpretation. If the ground starts shifting, those correlations simply cease to be relevant: in 2013, historic purchase data for low-cost meat-based products became irrelevant overnight in the face of the horse-meat scandal.
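To make this concrete, here is a minimal sketch in Python. The numbers are entirely invented (nothing here comes from the horse-meat case itself); it simply illustrates how a model fitted to a historic correlation degrades once the underlying context shifts.

```python
# Minimal sketch: a predictor fitted on a historic correlation breaks
# when the context shifts. All figures below are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# "Before": weekly sales of a low-cost meat product track discounting.
discount_before = rng.uniform(0.0, 0.3, 200)   # fraction off list price
sales_before = 100 + 400 * discount_before + rng.normal(0, 10, 200)

# Fit a simple least-squares line to the historic data.
slope, intercept = np.polyfit(discount_before, sales_before, 1)

# "After": a trust shock (say, a food scandal) suppresses demand at every
# price point, so the old discount-sales relationship no longer holds.
discount_after = rng.uniform(0.0, 0.3, 200)
sales_after = 30 + 20 * discount_after + rng.normal(0, 10, 200)

def rmse(discount, sales):
    """Root-mean-square error of the historic model on the given data."""
    predicted = intercept + slope * discount
    return np.sqrt(np.mean((predicted - sales) ** 2))

print(f"error on historic data:  {rmse(discount_before, sales_before):6.1f}")
print(f"error after the shift:   {rmse(discount_after, sales_after):6.1f}")
# The second figure is many times larger: the correlation was real, but only
# within a context the purely behavioural model had no way to represent.
```

The correlation itself was never ‘wrong’; it was contingent on a context the model did not encode – which is exactly the interpretive work that a purely predictive stance discards.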

A more considered view
What is needed is a more intelligent, considered view of the opportunities that Big Data represents. It certainly provides a granular record of human behaviour that is historically unprecedented. Hans Eysenck, one of the early pioneers of psychology, developed his famous personality theory by studying soldiers in WWII who were being treated at the hospital where he worked as a staff psychologist. Just think what he could have done with the material available from our digital traces.

But the desire for ‘solutionism’ runs deep and is perpetuated by many technology providers proffering solutions to augment our supposedly inadequate capabilities. Take the current rash of wearable technology that allows users to log thousands of photos of their lives every day. A great facility for market researchers, of course, but it is surely questionable whether this adds to or detracts from the experience of being human. Some things we would rather forget.

Yet more and more we are seeing metaphors of humans as flawed versions of technology, versions which Moore’s Law will steadily improve upon. The marketing literature of most technology providers now routinely kits out their offerings with anthropomorphic qualities – increasingly, technology is cast as the ‘brains’ of the operation. Are we really going to reduce human capabilities in this way?

The creep of technology solutionism
And technology-based approaches are slipping into wider aspects of business culture. For example, hackathons have long been used by the tech community to foster rapid innovation: participants, typically programmers, are placed in competition to develop a solution, often an app, in a short space of time. The format is now gaining popularity among both commercial and public sector organisations, but frequently with different teams and ambitions – often cross-disciplinary teams looking to resolve quite deep-seated issues that the organisation faces. There is nothing intrinsically wrong with the approach, but expectations of what can be achieved in 48 hours with beer and pizza may be unduly high. The mere fact that it has worked in the tech sector often seems to imbue these events with a magical quality – again, what Morozov might consider to be solutionism at work.

Making the case for human centred approaches
The research industry has rightly embraced technology – both as a data collection tool and now as a means of analysing the amazing opportunity that Big Data represents. But unless we are smart about it, there are pitfalls in too readily accepting explanations of human behaviour that do not do justice to the nuanced understanding built up over many years, not only in our industry but of course in academic institutions.

Understanding humans is a messy business that we don’t always get right, but in that respect the research industry is no different to the army of social scientists whose business it is to understand humans. There are many tools available to us, and the smart researcher is multi-skilled, experimenting with different methods depending on the purpose of the study.

But the research industry now faces a choice: accept the technology-based solutionism agenda, or start to articulate an alternative, ‘human’-based approach. Choosing the latter does not mean rejecting technology; it means making the case for explanations that have their roots in something other than a reductionist, technological agenda – using technology as our tool, not as a paradigm that defines our solution.

So the market research industry needs to articulate how we can work with data scientists to generate more value for brands – not just filling in the gaps, but effectively providing the paradigm within which all parties are working. Ambitious? Maybe. But frankly I struggle to see who would do a better job.

Colin Strong heads up technology research at GfK in the UK. He is particularly interested in the way our lives are increasingly mediated by data – between consumers, brands and government.  Colin is interested in exploring the opportunities this represents for a better understanding of human behaviour but also to examine the implications for brand strategy and social policy.

1 comment

Edward Appleton October 15, 2013 at 10:43 pm

Colin – interesting. Recently came across this website on the Quantified Self – http://www.quantifiedselfresearch.org – with a very interesting take, talking about “a series of influential agendas in the form of smart cities, big data, the quantified self, and algorithmic culture”. The talk is of technocracy and neo-liberalism – beyond simple technology-focus. Just food for thought. Edward

