Essay

On becoming posthuman

Big history and the future

YOUR 185-MILLIONTH GREAT-grandparents were fish. Your descendants will not be human forever. In a world of climate change, rising automation, gene editing and advanced artificial intelligence, modern humans are experiencing unprecedented technological disruption and global transformation. At this juncture, biological evolution is no longer the major driver of change in the biosphere – as a toolmaking, technologically advanced species, we find ourselves holding the evolutionary reins. Although we stumbled into this role unwittingly, we are increasingly aware of our power and significance in the broader evolutionary sequence. With brains capable of inventing modern computers and artificial intelligence, we have begun to semiconsciously seed new types of intelligent beings: technological ‘mind children’ that many experts predict will one day far eclipse us. The roboticist Hans Moravec believes that ‘within the next century’ our mind children ‘will mature into entities as complex as ourselves, and eventually into something transcending everything we know’.[i] They will be our post-biological offspring and we, their human parents, will fade.

Such prospects are not easy subjects to engage with. Although change is an evolutionary constant, most of the changes in planetary and human history have happened over many generations, if not millions or billions of years. Never before have humans been faced with the possibility of transcending human nature in the span of a single lifetime. At this transformative juncture in history, it is imperative that we cultivate robust, scientifically informed worldviews that can help explain how we got here, why things are changing so rapidly, and why, barring any major setbacks, we will ultimately become superhuman – or develop more advanced forms of posthuman life that will supersede us.

 

THERE IS A modern origin story that can help us understand both these novel phenomena and future possibilities. The story is ‘big history’ and it can provide modern humans with a map of reality that places these ideas in their broadest historical context. Unifying the most current theories across the scientific disciplines – from the laws of thermodynamics to continental drift to evolution by natural selection – this map illustrates, in a deep-evolutionary sense, where we come from and what we are. It zooms out beyond the scale of tribes, nations and human history to present a panoramic view of 13.8 billion years of cosmic evolution. At this scale, we can see how everything is connected, from radiation and atoms to stars, galaxies, microbes, cities, technology, and you and me.

Like all other origin stories, big history frames modern phenomena in a larger context and helps orient our lives, values and societies in relation to a larger story of existence. But, unlike traditional creation myths, big history draws upon the most current scientific knowledge to tell this story. It is a narrative framed by eight major turning points, or thresholds of increasing complexity: the big bang, the ignition of the first stars, the emergence of new and more complex chemical elements, the formation of Earth and the solar system, the rise of life on Earth, collective learning, agriculture, and the rapid changes of the modern era, beginning with the industrial revolution and extending into the information age. Homing in on these turning points and highlighting the relationships between them, big history helps us to understand how a universe that initially contained only very simple forms of matter could have evolved over billions of years to give rise to things that are as complex and dynamic as human cultures and technologies.

We can summarise the direction and implications of the big history story as follows. As the universe slowly cooled and matter stabilised after the big bang, physical evolution gave rise to new and more complex things like stars, which in turn gave rise to heavier and more complex chemical elements. Over billions of years, a new emergent property of chemical evolution appeared on at least one planet in the universe – as life on Earth – and so the great chain (or rather the web, or tree) of biological evolution began. We humans are products of billions of years of physical, chemical and biological evolution, but we are also the direct progenitors of a new kind of evolution on Earth: technological evolution. And in this new paradigm, changes are happening astonishingly fast.

When life first emerged on Earth around 3.7 billion years ago, organisms were extremely simple and evolution occurred at a glacial pace. But the intervals between salient evolutionary events have continually decreased at an exponential rate. It took around two to three billion years for the first multicellular life to appear. Yet it only took hundreds of millions of years for the more complex animals of the Cambrian explosion to start cavorting about the Earth. For reptiles to emerge, the time interval roughly halves to around 200 million years. Mammals then appear in roughly half the time again, after about 100 million years.
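To make the pattern concrete, here is a rough back-of-the-envelope formalisation – an illustrative simplification of the halving described above, not a calculation made in the essay itself:

```latex
% Illustrative sketch only: if each interval between salient
% evolutionary events is roughly half the previous one, the
% $n$-th interval is
\[
  t_n = \frac{t_0}{2^n},
\]
% so with $t_0 \approx 2$ billion years (first cells to multicellular
% life), the sequence shrinks towards the ~200- and ~100-million-year
% gaps for reptiles and mammals cited above, while the total elapsed
% time remains finite even over infinitely many halvings:
\[
  \sum_{n=0}^{\infty} \frac{t_0}{2^n} = 2\,t_0 .
\]
```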

A major new turning point in the big history story occurs with the rise of anatomically modern humans and our unique capacity for collective learning. Collective learning is a kind of information revolution in human history that has been ongoing since our species first developed symbolic language. Prior to this, DNA was the primary medium for transmitting crucial information about life across generations. The trouble with DNA is that a random mutation in a single individual won’t survive unless that individual lives long enough to pass on their genes. If, over generations, that mutation confers a survival advantage in a given niche it will continue to be passed down, but it will take a long time for it to spread throughout a population. DNA is also incapable of passing on acquired knowledge in the way humans can through symbolic language.

Language and storytelling gave humans a new set of tools to extend the reach of our minds and share collective wisdom, making us smarter as a group than we are as individuals. Because learned information survived intergenerationally and could be updated over time, we were able to build on our discoveries, which allowed them to become more complex and sophisticated at shorter intervals. In effect, collective learning gave humans the ability to exert greater control over planetary systems, eventually enabling us to outflank biological evolution as a driver of change. As far as we know, we humans are the first emergent property of evolution to represent the universe (or a tiny part of it) becoming self-aware. As such, we appear to be significant on a planetary, and perhaps even cosmic, evolutionary scale.

After living as hunter-gatherers for more than a hundred thousand years, humans began to transition to an agricultural way of life around 10,000 years ago. The adoption of sedentary agriculture necessitated considerable technological and cultural innovation. In areas with readily domesticable crops and animals, the humans who adopted farming were able to extract far more caloric energy from their local environments than the humans who continued to practise hunting and gathering. As a result, larger populations and more complex power hierarchies and modes of social organisation began to emerge. As surplus food and secondary products accrued it became necessary to keep track of wealth, leading to the development of numbers, counting systems and written records. Writing was a revolutionary new way of storing and transmitting information across generations with greater fidelity, and writing technologies continually improved at an accelerating rate.

In the final moments of our story, in less than 250 years, agricultural ways of life have been displaced by the industrial revolution and the rapid changes of the modern information age. Many of the most radical transformations in human history have occurred in these past 250 years – a tiny interval of evolutionary time. Unleashing a tremendous payload of energy stored in fossil fuels, human societies grew, complexified and globalised at an astonishing rate. Humans began to experience cascades of major revolutionary changes throughout their lives. Those who were born in a world of horse-drawn buggies, ice boxes and telegrams lived to see the emergence of modern cars, refrigerators and televisions – not to mention the contraceptive pill, the rise of commercial aviation, the advent and use of nuclear weapons, and the successful landing of humans on the moon.

The global population nearly quadrupled in the twentieth century, skyrocketing from 1.65 billion in 1900 to over six billion in 2000.[ii] Although the rate of growth is slowing, we are fast approaching a population of eight billion. In short order we have developed powerful machines, from steam engines to computers, that have enhanced productivity and led to dramatic growth in global GDP. Concurrently, we have been consuming energy and resources at a phenomenal rate, contributing to exponential increases in atmospheric carbon dioxide levels, ocean acidification and declining biodiversity. As a result, we are now having to confront the disquieting idea that the prolonged existence of humanity as we know it is unsustainable.

We are living at an evolutionary tipping point. Either we use more advanced technologies to dodge our current sustainability woes and accelerate our advance towards posthumanity, or we remain human and battle with an ever-growing number of anthropogenic and natural existential risks, from climate change and nuclear proliferation to the threat of supervolcanoes, superearthquakes and asteroid collisions, as well as the emerging risks posed by the rise of superintelligence, self-replicating nanotechnology and bioengineered pathogens. As unenhanced humans wielding powerful technologies with ape brains, we are extremely vulnerable in our current form. The longer we remain human and earthbound, the more likely we are to face extinction. Our best hope for the future is to seize the evolutionary reins and use greater-than-human forms of artificial intelligence to help us confront the unique and rapidly proliferating challenges of the modern world – while taking care to develop AIs with values and motivations that are aligned with our goals of surviving and flourishing. The only caveat is that, with technology evolving so rapidly and merging ever more with human biology, we may have to accept a form of survival that renders us posthuman.

 

THE IMPORTANCE OF the accelerating pace of cultural and technological change cannot be overstated. In the realm of information technologies alone, symbolic language probably appeared around 100,000 years ago – after which it took roughly 95,000 years for humans to develop writing. Block printing emerged in China several thousand years after that, from the seventh century AD; movable type was developed in China in the eleventh century, and only 400 years later Johannes Gutenberg invented his revolutionary modern movable-type system. By the sixteenth century, a global system of communication and transport had emerged. In the eighteenth and nineteenth centuries these networks became markedly more efficient and complex with the advent and widespread adoption of newspapers, modern postal services and the telegraph.

The telegraph was invented in the 1830s and began to be widely used in the 1860s, but it took less than a century for the telephone, radio and television to follow. The personal computer appeared only a few decades after television, and the mainstream rise of personal computing was separated by only about a decade from that of the modern internet, enabled by Tim Berners-Lee’s invention of the world wide web in 1989. Although it feels like they’ve been around forever, smartphones have only been with us for eleven years, swiftly out-evolving their mobile predecessors and morphing into powerful pocket-sized computers that place a global repository of information at our fingertips. Social media has also evolved extremely rapidly in the past decade. With software updates regularly beaming in, we now have constant access to new generations of technological improvements for our products and applications. Framed by the larger sequence of biological and human evolution, it’s clear that the current technological status quo is extremely unlikely to represent the pinnacle of technological development. Nor is it likely that decades or centuries will pass before we see major new changes.

In 2016, Mark Zuckerberg announced that Facebook was working to develop AIs that can outperform humans in ‘seeing, hearing [and] language’, and claimed they would likely succeed ‘in the next five to ten years’.[iii] The burning question for humanity is how long it will take for computers to exceed human intelligence in all domains. Ray Kurzweil, a director of engineering at Google, believes that this will happen around 2030. Many other researchers in the field of artificial intelligence believe that it will happen sometime this century. In a recent survey, the experts’ median estimate was a 50 per cent chance of high-level machine intelligence – capable of carrying out most human tasks with at least average human proficiency – by mid-century, rising to 90 per cent when the date is extended to 2075. The same experts also expect ‘that systems will move on to superintelligence in less than thirty years thereafter’.[iv]

Kurzweil has long argued that the rise of superintelligence in the twenty-first century is the next logical evolutionary step. He frames the history of cosmic evolution in terms of six epochs of paradigmatic change, emphasising that, over time, information is stored and transmitted more efficiently as evolution itself evolves. Consequently, major paradigm shifts happen at shorter intervals. Once life appears on Earth the acceleration of change is exponential.

In the early universe, information is encoded in atomic structures. Once biological organisms emerge, information about the design of these new evolutionary structures is encoded in RNA and DNA. Then comes the evolution of brains, which eventually leads to the emergence of human brains capable of inventing the next big thing: technology. In the modern world, with huge swathes of information now encoded in hardware and software designs, we find ourselves in Kurzweil’s Epoch Five, a period in which brains and machines are merging. Ours is the age of hybrid intelligence, but it is also an age of phenomenal evolutionary acceleration; Kurzweil believes that the next step is for the machines to rapidly out-evolve us and proliferate throughout the universe. Regardless of whether he is right about the specifics, it’s certainly true that modern artificial intelligence is evolving at a staggering rate and could become superintelligent this century.

Compare the millions of years it took for our hominid ancestors to evolve into modern humans with the time it took the deep-learning algorithm AlphaGo to evolve from playing its first game of Go to defeating the best human champions. DeepMind’s AlphaGo project began in 2014 and by late 2015 the AI had defeated the European Go champion Fan Hui five games to zero. In 2016, AlphaGo defeated the eighteen-time world Go champion Lee Sedol four games to one. These early versions of AlphaGo learned to play by studying thousands of games of Go. Unlike a human, an AI can do this incredibly quickly, committing many more patterns and heuristics to memory than a human brain can hold and upgrading its abilities constantly – all without needing to eat, sleep or rest.

In 2017, a new and more powerful successor, AlphaGo Zero, was born. Unlike its predecessors, AlphaGo Zero began its career with no human knowledge of the game: given only the rules, it learned by playing against itself, without example games or guidance. In only three days, starting from a blank-slate mind, it surpassed the version of AlphaGo that defeated Lee Sedol in 2016, winning a hundred out of a hundred matches. After twenty-one days, AlphaGo Zero surpassed a 2017 version of AlphaGo called AlphaGo Master. After forty days, it had surpassed all previous versions of AlphaGo and was unbeatable.[v] After only a few days of play, the AI was superhuman, but it evolved extremely rapidly, becoming super-superhuman soon after – a harbinger of things to come.
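For readers curious about the mechanics, the sketch below shows the shape of tabula-rasa self-play learning in miniature. It is emphatically not DeepMind’s code – AlphaGo Zero combined deep neural networks with Monte Carlo tree search – but a toy stand-in under stated assumptions: simple tabular Monte Carlo value learning on the game of Nim, in which an agent given nothing but the legal moves can discover the winning strategy purely by playing itself. All names and parameters here are invented for illustration.

```python
# A minimal sketch of self-play learning, in the spirit of AlphaGo Zero
# but radically simplified: tabular Monte Carlo value learning on Nim
# (take 1-3 stones; whoever takes the last stone wins). Illustrative
# toy only -- not DeepMind's method.
import random
from collections import defaultdict

PILE, ACTIONS = 21, (1, 2, 3)
Q = defaultdict(float)        # Q[(stones_left, action)] -> estimated value
ALPHA, EPSILON = 0.1, 0.1     # learning rate and exploration rate

def legal(stones):
    return [a for a in ACTIONS if a <= stones]

def choose(stones, explore=True):
    moves = legal(stones)
    if explore and random.random() < EPSILON:
        return random.choice(moves)               # occasional random move
    return max(moves, key=lambda a: Q[(stones, a)])

def self_play_episode():
    """One game in which the agent plays both sides, then learns from the result."""
    history, stones = [], PILE
    while stones > 0:
        action = choose(stones)
        history.append((stones, action))
        stones -= action
    reward = 1.0                                  # the last mover has just won
    for state, action in reversed(history):       # credit moves backwards...
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward                          # ...flipping sides each ply

for _ in range(50_000):
    self_play_episode()

# With no example games supplied, the learned policy should (mostly)
# leave the opponent a multiple of four stones -- the known solution.
print([choose(s, explore=False) for s in range(1, 12)])
```

After enough self-play games the printed policy converges on the classic solution – always leave your opponent a multiple of four stones – without a single human example, which is the essential point that the AlphaGo Zero result scaled up to Go.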

 

OF COURSE, OUR Palaeolithic ancestors didn’t plan to become a new evolutionary force capable of inventing superhuman algorithms when they were eking out their existence on the African savannah, or building the tools and technologies that allowed them to migrate to new continents and populate the planet. Everything they did in their lifetimes was in the service of their immediate and near-future survival. Similarly, we are not currently planning, as a human collective, to invent our posthuman successors. In the short term, we are aiming to solve problems such as climate change, create greater economic efficiencies through automation, and develop life-extension therapies and forms of preventative and personalised medicine that can help ease the massive economic burdens of ageing populations around the world.

But to achieve these goals, humans are investing heavily in the development of advanced biotechnologies, robotics, nanotechnology and artificial intelligence. These technologies themselves are evolving at a phenomenal pace and they are already in the process of transforming every facet of human life; they could soon catapult us across big history’s next turning point or threshold. Imagine a hundred years in which the sort of change witnessed in the twentieth century is multiplied by hundreds, even thousands of times. That’s the Kurzweilean vision, extrapolating from a trend of billions of years of evolutionary acceleration. It’s easy to be sceptical and argue that we don’t seem to have experienced multiple twentieth centuries’ worth of change so far. That’s true, but remember: if Kurzweil is right (and of course he might not be), most of those rapid doublings will happen right at the end of the sequence – just as the monumentally rapid transition to the industrial era happened right at the end of a much longer sequence of at least 100,000 years of hunting and gathering, and 10,000 years of agrarian life. Following this logic, the next major compression of evolutionary time could be so jam-packed with change that we won’t just adopt new ways of life in decades or years: we will witness the end of the human era and the rise of posthumanity.

 

HUMANS OFTEN LIKE to think of humanity as an intrinsically good and stable entity that can be preserved in its current form for a long time to come. But at some time in the future (quite possibly this century), humans will cease to exist. Extinction has been the fate of 99 per cent of all species that have ever lived, yet we cling to myopic visions of the future that emphasise the survival of humanity in its current form.[vi] This is probably for the same reasons we used to believe that sending humans to the moon and creating life in a test tube would remain the stuff of science fiction forever: we are local, linear thinkers with Palaeolithic brains that struggle to comprehend exponential change.

But although change is an evolutionary constant, the change in our world is now extremely rapid. Our version of ‘normal’ entails experiencing more disruption and technological progress in months, days and hours than our Palaeolithic ancestors experienced over hundreds of thousands of years. We also live in a world of profound global challenges, from a still-growing global population to climate change and the challenges of global diplomacy in a world of nuclear weapons, bioweapons and lethal autonomous weapons. How will we agree about how to safely develop advanced artificial intelligence and equitably distribute the productivity gains such a breakthrough will generate? How will we mobilise as a global community to prevent or contain a major pandemic? How will we help the global population weather the impacts of future shock as the robots take our jobs, upend our lifeways, and usher in new and barely fathomable modes of being?

One of humanity’s most important short-term challenges will be to collectively recognise, and consciously strive to overcome, the many limitations of human thought and perception – faculties that did not evolve for global diplomacy, foresight or existential risk mitigation. Our hardwired cognitive biases, tribal instincts and susceptibility to short-term thinking and decision-making are very dangerous traits to retain. Unfortunately, we cannot simply out-think them. We need to consciously change the human condition on a fundamental, biological level, and merging our ape brains with higher forms of machine intelligence is one way to address this. Our larger visions of sustainability must start to encompass the prospect of becoming more than human, and ceasing to exist in our current form. Back in 1923, reflecting on humanity’s growing arsenal of powerful new technologies, the biologist JBS Haldane mused: ‘It may be urged that they are only fit to be placed in the hands of a being who has learned to control himself, and that man armed with science is like a baby armed with a box of matches.’[vii]

In this turbulent century of escalating promise and peril, it’s more intelligence and more technology that we need if we are to have any hope of surviving – or paving the way for the survival of our posthuman progeny. It’s time to embrace the prospect of becoming more than human and ceasing to exist in our current form. Granted, inventing forms of posthuman life that are more intelligent than us is hardly guaranteed to solve all the problems of the world in a neat, utopian fashion. But remaining human in perpetuity is simply not an option. In the true spirit of dynamic sustainability, we should strive to keep the better angels of human nature alive as we coevolve with technology. Let us hope that, as they come of age, our mind children will allow us to merge with them, and that, in some form, the best legacy of humanity survives.

 

Read more about big history

David Christian ‘The history of our world in 18 minutes’ TED March 2011

David Christian Origin Story: A Big History of Everything (New York, Little Brown, 2018)

David Christian Maps of Time: An Introduction to Big History (Berkeley, University of California Press, 2004)

 

Read more about posthumanity and the rise of intelligent machines

Nick Bostrom Superintelligence: Paths, Dangers, Strategies (Oxford, Oxford University Press, 2014)

Ray Kurzweil The Singularity is Near: When Humans Transcend Biology (New York, Penguin, 2005)

 

References

[i] Hans Moravec Mind Children: The Future of Robot and Human Intelligence (Cambridge, MA: Harvard University Press, 1988)

[ii] Max Roser and Esteban Ortiz-Ospina ‘World Population Growth’ Our World in Data April 2017 https://ourworldindata.org/world-population-growth

[iii] Ben Popper ‘Mark Zuckerberg thinks AI will start outperforming humans in the next decade’ The Verge 28 April 2016 http://www.theverge.com/2016/4/28/11526436/mark-zuckerberg-facebook-earnings-artificial-intelligence-future

[iv] Vincent C. Müller and Nick Bostrom ‘Future progress in artificial intelligence: A survey of expert opinion’ in Fundamental Issues of Artificial Intelligence, ed. Vincent C. Müller (Berlin: Springer, 2016), 553–571

[v] Demis Hassabis and David Silver ‘AlphaGo Zero: Learning from scratch’ DeepMind 18 October 2017 https://deepmind.com/blog/alphago-zero-learning-scratch/

[vi] Edward O. Wilson ‘A Biologist’s Manifesto for Preserving Life on Earth’ Sierra 12 December 2016 https://www.sierraclub.org/sierra/2017-1-january-february/feature/biologists-manifesto-for-preserving-life-earth

[vii] J. B. S. Haldane Daedalus: or Science and the Future (New York: E. P. Dutton & Company, 1923)
