Richard Watson
First published in Research World September 2010
Is the digital era turning us on or switching us off?
2010 trends maps have been flying around cyberspace for some time now, so it’s time that they were explained. Not that there’s much to explain.
The maps are doodles – rough sketches of where we are and where we’re heading. They are not anything serious, although indirectly they raise a few serious questions, especially ones relating to how society connects with technology.
It would be a gross exaggeration to suggest that the history of society is the history of technology, but new technologies do sometimes represent major turning points, especially when these technologies intersect with existing attitudinal and behavioural shifts. One technology that looks capable of creating such a shift is digitalisation, although at the moment we seem to be largely oblivious to many of its potential long-term implications.
Most of the current debate about digitalisation surrounds the creation and consumption of digital media, especially the creation of too much information and the death of privacy via social networks, as well as the decline of physical newspapers and books due to the shift from paper to pixels. But digitalisation has much wider implications than this.
Too much connectivity – not enough connection
For example, a ‘community’ used to be a collection of people that shared a physical space (eg, a street or village), but global connectivity has shifted the definition of community towards individuals with shared interests regardless of physical location. This is a good thing in the sense that people with shared interests and passions can now easily find each other, but does this shift towards online communities mean that we are starting to neglect local physical relationships in favour of their digital equivalents?
The death of a three-month-old child called Sa-rang in South Korea recently suggests that we are. Kim Yun-beom (aged 41) and Kim Yun-jeong (aged 25) became so addicted to raising a virtual child called Anima from an internet café in a suburb of Seoul that they allowed their real daughter to starve to death, alone, in their apartment.
This is an extreme example, but it’s not the only example of where the cold logic of computers is taking the warmth out of human relationships. Online relationships are becoming so convenient that we are failing to invest in many of our own physical relationships through face-to-face interaction.
As Dr Kim Tae-hoon, a South Korean psychiatrist, comments: “People are becoming numb to human interaction.”
It’s quite a jump from the death of a three-month-old child to the use of self-checkout aisles in the local supermarket, but they are both part of the same continuum. Self-scanning groceries saves time and means that supermarkets can employ fewer people, but if customers get used to dealing with machines instead of people then where does this ultimately lead? Some people will clearly be happy that it takes less time to shop, and perhaps that the groceries will be marginally cheaper, but what happens to the individual who used to have a job in the supermarket? Do they get another job? And if not, do they just sit at home playing computer games, raising virtual children, and failing to interact with the rest of society? Furthermore, what happens to the shopper who gets used to using the grocery scanner rather than dealing with another human being? Does their lack of real interaction in retail environments bleed into their personal relationships?
Some people don’t think that this matters. It’s just a generational shift. For instance, in his book I Live in the Future and Here’s How it Works, Nick Bilton says: “I don’t see any lines between real-life friendships that involve talking or looking someone in the eye and virtual ones, where communication is through email or text messages … we may not drink beer or coffee together or arrange birthday or anniversary cards, but we can adore each other’s pets via photo albums on Facebook, send birthday greetings or share funny videos or important news via Twitter.”
Where is the humanity?
Really? So the medium no longer influences the message? Personally I think direct human interaction is important. If you love someone it’s important to tell them so directly rather than just sending them an e-card. And if they suddenly die, your physical attendance at their funeral or, at the very least, a handwritten card would be more thoughtful than a text message.
It is generally assumed that digital objects (like mobile phones) and digital environments (like Facebook and Twitter) are bringing us closer together, but I think there’s an argument that says they are doing quite the opposite. Familiarity is increasing globally, but intimacy is declining, especially at a local level. As a result our physical relationships are becoming superficial and disposable. Far from connecting us, digitalisation is helping to create a society comprised of rude, impatient, narrow-minded, self-centred, stressed-out and aggressive individuals.
Similarly, digital communications are changing what were once public spaces into private ones. Now, I’m not suggesting for a second that we get rid of our new digital toys, but mobile phones and PDAs can adversely affect the quality of our thinking due to the level of distraction they create.
For example, according to Nicholas Carr, author of The Shallows, “A series of psychological studies over the past 20 years has revealed that after spending time in a quiet rural setting, close to nature, people exhibit greater attentiveness, stronger memory, and generally improved cognition.”
The business implications of this are fairly obvious. If we are constantly connected and distracted (answering emails, taking phone calls, updating Facebook pages, sending tweets etc) we are not leaving ourselves enough time to think. We are more likely to make stupid mistakes and less likely to make good decisions or have great ideas.
But it’s not just clear communication that requires a calm and reflective mindset. Digital communications, mixed with everything from urbanisation and individualism to free markets, are also reducing empathy and compassion.
According to Antonio Damasio, a director of the University of Southern California’s Brain and Creativity Institute, empathy and compassion are linked to neural processes that are “inherently slow.” In other words, if we are constantly busy and distracted we not only have less time for others, but we have less time to connect with moral and ethical questions relating to other people’s psychological states. We know what’s going on around the world thanks to 60-second news updates on the BBC (available on any digital device) but we have no idea what our neighbours are called or what they are really thinking.
This, in turn, links to another spin-off from digitalisation, which is personalisation. Personalisation, particularly of media, has some obvious benefits, but if we personalise things too much this can also feed into a situation where compassion is reduced. If all incoming information is personalised (not just personal communications, but also friendship requests and news feeds), it’s possible that we will cut ourselves off from serendipitous encounters with ideas, especially those we do not agree with.
Distributed intelligence
So far I have only spoken about digitalisation and connectivity in the context of the individual, but could digitalisation result in a major shift away from the individual towards the group?
The tendency of large groups to be smarter than any single individual (largely a statistical phenomenon) has been known for some time, although it took James Surowiecki’s book, The Wisdom of Crowds, to place associated ideas like prediction markets on the corporate radar. Problem-solving (and to some extent idea generation) is more productive when more minds are given the problem.
But while groups are better than individuals at solving complex word and letter puzzles, in other instances less can be more. According to an experiment conducted by scientists at the University of Illinois, physical teams of two perform no better than two individuals. Teams of three do create a significant performance benefit, but if you go beyond three people there is no additional benefit.
Personally, I think there are limits to where collective wisdom can go. If what you’re after is a prediction about an occurrence, testing, or refinement of an existing idea, then a distributed crowd is a very good way to go about it. It has also been shown that richly-connected networks are especially good at solving simple problems or finding information. However, large groups are weaker than individuals when it comes to creating entirely new ideas.
This seems to confirm what I’ve been thinking for many years: if you are after highly original thinking, a single talented individual (or a small group with diverse experience) is a better route than a larger group. Crowds will tend to dismiss any new idea that does not immediately fit with known ideas and anything unusual or different will usually be rejected.
Wising up or dumbing down?
There are currently more than 4 billion mobile phones on the planet and an increasing number of these are connected to the internet. As this connectivity increases there is likely to be a convergence in terms of what people know and think. For example, using Google to find information is narrowing our sources rather than widening our thinking because most people (around 99%) do not proceed beyond the first page of search results.
Given the increasing level of global connectivity, the idea of a ‘global brain’ is interesting and undoubtedly has many benefits. However, large networks can have attitudes and behaviours that are actually more comparable with a single individual than a federation of independent minds.
I should end by stressing that I am not declaring war against digital technology. Google isn’t actually evil. All I am saying is that there are times – and places – where people, not machines, should come first. Remember too that computers are supposed to be tools to help us to think. What computers are especially good at, what they were invented for, is processing large amounts of information, thus uncovering the insights that allow human beings to think deeply, solve problems and have new ideas.
But we have started to use tools such as these to replace or outsource our own thinking. Similarly, email and mobile phones are useful ways of enriching physical conversation and contact, but we have started to use them to replace it. It is this replacement that concerns me the most. Given the fact that we seem to be capable of inventing more or less anything these days, perhaps a question that we should be asking ourselves more frequently in the future is not whether we can invent something but whether we should.
Richard Watson is the founder of nowandnext.com and is the author of two books, Future Files: A Short History of the Next 50 Years and Future Minds: How Digitalisation is Changing Our Minds, Why this Matters and What We Can Do About it.