One is the loneliest number
That you'll ever do
Two can be as bad as one
It's the loneliest number since the number one
– Harry Nilsson
DON'T KID YOURSELF. Your mind and its contents, memories and neuroses, creative impulses and curiosities, are not your own. From birth, there's a merger going on that rivals the efforts of any great mogul, and your self is just one component of this cyborgian enterprise. Your cognitive scaffolding is as much social and technological as it is of the flesh. Reach out and touch the world, and you're rummaging among the crevices of your mind.
But where does this strange possibility leave one's individual autonomy and the boundaries of the self – you and me, us and them? To explore the impressive scope of recent thinking about our networked mind, consider first the history pages of psychology and the roots of our unconscious mind.
In the deepest folds of slumber, what sense can we make of the novel occupants of our dream world? When we dream, read a fairytale or go to the cinema, do we repeatedly encounter universal themes and characters? Psychologist Carl Jung famously thought so, and saw a connection between the figures in our dream stories and those in our popular mythologies and stories. These are his so-called archetypal symbols, among them the Shadow Figure, the Hero Figure, the Anima and the Great Mother. Together they occupy what Jung controversially coined the Collective Unconscious that, he proposed, unites the contents of our individual psyches.
Though Jung's tenuous idea infuriates many, its possibilities still intrigue and remain threaded through the fibres of popular culture in film, prose and the visual arts. Perhaps its enduring seduction is that it helps us feel less lonely in moments of solitude, offering us the prospect of a virtual community of the mind. After all, implicit in Jung's theory is the belief that at some level we are all cognitively connected in some way, that the "persona is a collective phenomenon".
TODAY, MOST SCIENTISTS view the possibility of a collective unconscious as esoteric hocus-pocus – mystic madness even – impossible to measure and too woolly to embrace. And who can blame them?
Nevertheless, the notion of a networked, connected self appears to be finding a new, more material footing, as metaphors of the mind merge with 21st-century science and technology. "Connectionism" has become all the rage – from artificial life and neural networks to chaos theory, self-organisation, complexity and cognitive modelling. These are just a handful of the mind-expanding fields embracing the idea that there is more than the power of one.
Swarms of bees, colonies of ants, stock markets and high-rises of humans – all social communities – are now considered to have an emergent intelligence, even a global consciousness of sorts. As a thriving, writhing mass, might they possess a combined acumen beyond the stupidity of any stray bee, ant or human? It sketches an apocalyptic vision that would have had writer John Wyndham, author of The Day of the Triffids and other great sci-fi classics, itching with fictional possibilities. "Attack of the Brilliant Bees" – Hollywood, here we come.
In another vein, the push for "connectionism" is an interesting development, because ideas that espouse a more global version of reality have long been hotly disputed in science – even seen as a fundamental, postmodern-style challenge to its rational enterprise.
Recall the contentious Gaia hypothesis, proposed by atmospheric chemist James Lovelock in the early 1970s. The Earth, Lovelock hypothesised, is a self-regulating, self-equilibrating living organism in and of itself – more than just home to a vast and diverse population of discrete yet interconnected organisms, systems and species. His idea is still embraced by some environmental and spiritual thinkers but has fallen out of favour among mainstream scientists who are entirely sceptical of the implications of such a grandiose vision.
"The Gaia hypothesis attracted the most attention from theologians interested in the possibility that the Earth controlled its environment on purpose, from those looking for 'oneness' in nature, and from those defending polluting industries, for whom the Gaia hypothesis provided a convenient excuse whereby some collective set of natural processes would largely offset any potential damages from human disturbance to earth systems," wrote Stephen Schneider and Penelope Boston in their book Scientists on Gaia (MIT Press, 1991).
THERE ARE MANY, and more acceptable, examples of "big" ideas that have historically received a cool reception in scientific circles. The search for physical or "neural" correlates of human consciousness is one of them (though that's rapidly becoming a mainstream effort now); and Galileo's original challenge to the medieval cosmos could be considered another.
Engineering and science have traditionally relied on reductive thinking to get on with their core enterprise of problem-solving. This capacity is both a commendable strength – it provides an essential and practical focus – and a weakness. Putting the blinkers on can mean missing the bigger picture, possibly the most important questions and, frequently, the interconnectedness of things. As we well know, the latter can have disturbing consequences and has been the basis for much of the criticism of the tech-fix approach. Technological solutions can create as many unforeseen problems (Chernobyl, thalidomide, industrial displacement) as they solve (nuclear medical technologies, cancer treatments, workplace safety).
My experience as an engineering undergraduate in the early 1990s firmly cemented this problem in my mind. Four years of grappling with fluid mechanics and differential equations, sitting in enclosed laboratory classes and producing single-number solutions to simplistic questions just didn't strike me as sufficient. How was I ever to negotiate the full breadth and might of the human-industrial complex, with all its predicaments of policy, risk, history and social need? Admittedly, that's not the job of any single engineer, or novice undergraduate for that matter, but the bigger connections and context were absent and that deeply frustrated me.
MORE RECENTLY, THOUGH, especially among the mind and computer sciences, scientists and philosophers have become bravely expansive and are thinking in terms of networks rather than discrete systems or individual components. "There is a path between any two neurons in our brain, between any two companies in the world, between any two chemicals in our body. Nothing is excluded from this highly interconnected web of life," suggests physicist and writer Albert-László Barabási in his book Linked: The New Science of Networks (Perseus, 2002).
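For the curious, Barabási's claim that any two nodes sit only a few steps apart can be tested in miniature. The sketch below is an illustrative toy of my own, not Barabási's code: it grows a network by preferential attachment – newcomers link to already well-connected nodes – then counts the hops between two members by breadth-first search.

```python
import random
from collections import deque

def grow_network(n, m, seed=42):
    """Grow a network of n nodes where each newcomer links to m existing
    nodes chosen with probability proportional to their current degree
    (a rough sketch of Barabasi-style preferential attachment)."""
    rng = random.Random(seed)
    # start from a small, fully connected core of m + 1 nodes
    edges = {i: {j for j in range(m + 1) if j != i} for i in range(m + 1)}
    # each node appears in this list once per edge it touches, so random
    # picks from it are automatically degree-weighted
    endpoints = [i for i in range(m + 1) for _ in range(m)]
    for newcomer in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(endpoints))
        edges[newcomer] = targets
        for t in targets:
            edges[t].add(newcomer)
            endpoints.append(t)
        endpoints.extend([newcomer] * m)
    return edges

def hops(edges, a, b):
    """Shortest path length from a to b, by breadth-first search."""
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == b:
            return dist
        for neighbour in edges[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, dist + 1))
    return None  # unreachable (cannot happen here: the graph grows connected)

net = grow_network(500, 3)
print(hops(net, 0, 499))  # typically just a handful of hops
```

Even with 500 nodes and only three links per newcomer, any two members tend to sit just a few hops apart – the "small world" behind Barabási's web of life.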
Take the intricate cobweb of white and grey matter that we call our brain. When the new brain-imaging technologies like magnetic resonance imaging (MRI) and positron emission tomography (PET) really took off, neuroscientists were falling over each other to find the neurological hotspots for autism, appetite, musical talent, love, fear, sexuality and schizophrenia. Every imaginable human trait, emotion and disorder was put under the pinpoint scrutiny of these tools.
To some extent that chase is still on, though today, thankfully, there is a deeper understanding that the brain is a much more sophisticated beast, with its different components wired together through vast and complex networks of more than 100 billion neurons. It's a work in progress but the brain is now also understood to be more plastic, continually evolving as we learn, live and grow. If a stroke from a blood clot or head injury does damage to one area, this can often be slowly accommodated by other parts of the brain, as neural networks adapt and rewire to take up the slack.
This back-pedalling from the reductive is occurring in genetic studies, too. Deterministic deliberations about a possible gay gene, an obesity gene, a gene for psychopathy, though they make for revved-up newspaper headlines, have given way to a more subtle appreciation of complex multiple gene-environment interactions that together define the moveable feast that is the human condition.
PERHAPS THE MOST interesting conundrum that still faces brain scientists is the question of how the brain creates the mind. Many argue that the mind, our subjective conscious experience, is a product of the firing and wiring of our neurons, and that the brain and psyche should never have been ruptured way back when René Descartes dreamt up the mind-body split.
But contemporary philosopher Andy Clark makes the case that there's more to our mind than the soft substrate of the brain. He suggests that the mind extends far beyond the contents of our skull-cave and our "ancient skinbag" (as he describes the human body, a delicious and grounding metaphor). To Clark, the mind is a networked entity that intimately connects our brain to the external world. Our mobile phones, computers, forks, pens and street maps are all crucial scaffolding of what he describes as our extended mind. Without these tools we'd get lost; we couldn't communicate, paint, write or, and this is key ... think.
And it gives the cemented tradition of Australian mateship something of a new hue, too. Our social networks, mates, family, work colleagues, even strangers on the street, are also a fundamental part of our extended mind. We're forever cognitively coupling with the minds of others to achieve shared goals – a conversation, a financial transaction, a rugby game, a teleconference, an international negotiation ... a war on terrorism. Our fluid self never quite begins and never quite ends. We are all "natural born cyborgs", cognitively defined by and dependent on the social and material props around us. From birth, we are bodies and brains fused with technology.
Intuitive as it sounds, Clark's take on the human mind appeals immensely. It represents a healthy acknowledgement of the material culture within which we exist, one that no amount of meditating at 10-day Vipassana retreats to empty our minds can distract us from. And it has more earnest implications, too. For example, Clark argues that to house people with Alzheimer's disease in a nursing home where their photos are put away, bookshelves are bare and their cupboards and drawers closed shut amounts to a human-rights abuse. Without the props of their life and mind around them at their visual beck and call, people with this dreadful disease have little hope of using what decaying mind they have left intact.
IT'S ONE THING to consider our intelligence and consciousness as a product of merging flesh, neurons, people and technology but today, a community of computer buffs is confident that all this can be emulated in the digital realm. This is "life as it could be". The raison d'être of artificial-life researchers is to program populations of digital creatures, many of them online, and watch them evolve, network, breed and mutate like good little genetically driven Darwinians. At a recent A-life conference, one woman cheekily commented to me that perhaps it's no coincidence that most A-life researchers are men. Could it be their way of creating new life without a womb perhaps? But A-life is painted as a serious enterprise and at its core is modelling the extraordinary powers of collective behaviour and group intelligence.
If networks of neurons, minds and digital beings can possess a social intelligence then so, too, can networks of people, computers and bacteria. Writer Howard Bloom lays out his argument for a "global brain" in his book Global Brain: The Evolution of the Mass Mind from the Big Bang to the 21st Century (Wiley, 2001).
"We're not just talking about 6 billion human beings; every bacterial colony the size of your palm has more creatures in it, all communicating with each other and all making smart decisions, than all the human beings that have ever been. When you take all these plants, all these animals, all these human beings, all these non-human beings and all these bacteria and put them together – you've got a global brain. You've got a kind of a superorganism, each of whose components, each different species, is somehow contributing its little bit of knowledge to the grander pool," says Bloom (All in the Mind, ABC Radio National, 2002).
We are all "neurons of this planet's interspecies mind", ventures Bloom, whose reputation is as a serious scholar. In this vein, the networked mind embodied by the internet is a global brain of Bloomesque proportions. Only here, the axons and dendrites of the brain's nerve cells are us and the wires across which we whiz emails in seconds. Bloom's idea of a mass mind is a fantastical thesis that instantly makes the world seem much smaller and offers one enticing rationale for globalisation. When we get together and communicate en masse, who knows what the global winnings in the intelligence stakes might be?
THE GLOBAL BRAIN can seem little more than intellectual foreplay, but neurosurgeon Frank Vertosick, author of The Genius Within: Discovering the Intelligence of Every Living Thing (Harcourt, 2002), also warns against what he calls "brain chauvinism", and the prevailing belief that brains are uniquely the "loci of intelligence".
It's a view that is gathering considerable momentum, in part because of the burgeoning idea of "emergence". It is a serious attempt to mathematically map the surprisingly non-random behaviour of complex networks – from stock markets to suburban neighbourhoods, from computer viruses to highway systems, media frenzies to malignant cell colonies. Unlike traditional reductionism, where the bigger picture is forever broken top-down into smaller pieces, like genes, lobes, inputs and outputs – this is a truly bottom-up approach, where decentralisation is everything.
Emergence refers to the idea that, en masse, the low-level actions and behaviours of many individuals can lead to higher-level sophistication – a "larger project", an "emergent intelligence", a "pattern in time". Populations can be smart, when their individuals may not be. Of course, horrific histories of repeated genocide tell us that the converse is just as true as well.
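The arithmetic behind "smart populations" is worth seeing for yourself. The simulation below is my own toy illustration, not drawn from any of the books cited here: it asks how often a simple majority vote lands on the right answer when each individual voter is only modestly reliable – the logic of Condorcet's 18th-century jury theorem.

```python
import random

def majority_accuracy(voter_accuracy, group_size, trials=20000, seed=1):
    """Estimate how often a majority of independent voters gets a yes/no
    question right, given each voter's individual accuracy."""
    rng = random.Random(seed)
    correct_outcomes = 0
    for _ in range(trials):
        # count how many voters happen to be right this round
        correct_votes = sum(
            1 for _ in range(group_size) if rng.random() < voter_accuracy
        )
        if correct_votes > group_size / 2:
            correct_outcomes += 1
    return correct_outcomes / trials

# individuals right only 60% of the time...
print(majority_accuracy(0.6, 1))    # a lone voter: about 0.6
print(majority_accuracy(0.6, 101))  # a crowd of 101: well above 0.95
```

The same mechanism cuts both ways: make each voter slightly worse than chance and the majority becomes almost certainly wrong – the population amplifies whatever its individuals bring to it.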
Steven Johnson, in his book Emergence: The Connected Lives of Ants, Brains, Cities, and Software (Penguin, 2001), takes up the idea, starting with the myth of the ant queen and the primitive chemical intelligence of a mass of mucus called slime mould. The extraordinary collective behaviours of the slime mould, Physarum polycephalum, have attracted an inordinate amount of interest outside of science as well.
In 2000, a team of Japanese scientists reported that a colony of this simple organism had managed to work out the shortest path through a mini-maze. It's a far cry from being capable of a decent academic discourse, but the suggestion is that new medical technologies could potentially exploit ways of communicating with intelligent cells in their own collective language, if they have one. As a single cell, slime mould is a klutz, but as a swarm something unusual happens. And the key here is that no one "leader" or "executive" cell appears to co-ordinate the group action when it happens. The secret is self-organisation. It's revolution, man.
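As a rough computational analogy – my sketch, not the Japanese team's method – the maze-solving can be mimicked with purely local rules. In the toy below, every open cell repeatedly looks only at its four neighbours, and the length of the shortest route emerges with no executive cell in charge.

```python
def local_flood(maze, start, goal):
    """Each open cell repeatedly sets its distance to one more than its
    best neighbour: purely local updates, no central planner. '#' is a wall."""
    INF = float("inf")
    rows, cols = len(maze), len(maze[0])
    dist = [[INF] * cols for _ in range(rows)]
    dist[start[0]][start[1]] = 0
    changed = True
    while changed:  # keep sweeping until no cell can improve
        changed = False
        for r in range(rows):
            for c in range(cols):
                if maze[r][c] == "#":
                    continue
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and dist[nr][nc] + 1 < dist[r][c]):
                        dist[r][c] = dist[nr][nc] + 1
                        changed = True
    return dist[goal[0]][goal[1]]

maze = ["....#..",
        ".##.#..",
        ".#..#..",
        ".#....."]
print(local_flood(maze, (0, 0), (3, 6)))  # shortest route: 9 steps
```

No cell ever sees the whole maze, yet the sweep settles on the true shortest distance – a crude echo of the self-organisation at work in the Physarum experiment.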
The social and economic implications of network theory and emergent behaviour are claimed to be considerable. Well-known net guru and eccentric (he paints shoes) Howard Rheingold believes that "smart mobs" are the next social revolution, people who are able to "act in concert even if they don't know each other" and "co-operate in ways never before possible because they carry devices that possess both communication and computing capabilities". He's referring to the mobile and global collaboration among internet gamers, text-message activists, eBay consumers, among others, all acting as distributed, self-organising clusters. And, Rheingold argues, business managers would do well to exploit this potential.
The science of complexity and connectivity can paint a cosy vision of a global commune, whether it be within the neuron-rich jelly mass of the brain, in internet chat rooms or among a teeming swarm of single-celled bacteria. But networks can be disastrous, too. When vital hubs fail, things can go anarchically wrong, as the massive power failures across the United States illustrated in August 2003, or the devastation caused by the Melissa computer virus reminded us.
And where does one's autonomy sit in this intertwined picture of connected beings, brains and bacteria? Is our individuality irretrievably diluted by this sea of networks? Perhaps it's no coincidence that in many of the collectivist societies of the world, the "global brain" that is the internet is censored and subjugated under the eyes of state control – top-down stamping out bottom-up self-organisation, well before it has the potential for brilliance.
In The Tipping Point: How Little Things Can Make a Big Difference (Back Bay Books, 2002), Malcolm Gladwell argues that any one of us can be the catalyst for what he calls "positive epidemics". The business analysts have taken to his ideas with rabid, uncritical interest but Gladwell may offer a salient reminder for all of us. In our lives and communities, individual action still speaks loudly. The power of one has exponential resonance – even if there are more than 6 billion "ones" on the planet of the human variety and many more mammals, reptiles, bacteria, fungi, mitochondria ... all just waiting to be cognitively inspired.