THE FUTURE IS not a destination. We build it every day in the present. This is, perhaps, a wild paraphrasing of the acclaimed author and futurist William Gibson who, when asked what a distant future might hold, replied that the future was already here, it was just unevenly distributed. I often ponder this Gibson provocation, wondering where around me the future might be lurking. Catching glimpses of the future in the present would be helpful. But then, I think, rather than hoping to see a glimpse of the future, we could instead actively build one. Or at the very least tell stories about what it might be. Stories that unfold a world or worlds in which we might want to live – neither dystopian nor utopian, but ours. I know we can still shape those worlds and make them into somewhere that reflects our humanity, our different cultures and our cares.
Of course, it is not enough to tell stories about some distant or unevenly distributed future; we need to find ways of disrupting the present too. It might be less important to have a compelling and coherent vision of the future than an active and considered approach to building possible futures. It is as much about critical doing as critical thinking. One approach to the future might be to focus less on the instruments of technologies per se and more on the broader systems that will be necessary to bring those futures into existence.
Today, there are many conversations about the future, and artificial intelligence (AI) figures centrally in many of them. Most of these centre on AI’s technical affordances. But AI is always, and already, a lot more than just a constellation of technologies. It exists as a set of conversations in which we are all implicated: we discuss AI, worry out loud about its ethical frameworks, watch movies in which it figures centrally, and read news stories about its impact here in Australia and abroad. AI is part of our cultural fabric. It is also part of a set of increasingly complicated systems – it is not one AI so much as many – and these systems encompass everything from the electrical grid and railway lines to mine sites, lift shafts and food-supply chains. These systems do not just live in our cultural imaginations; they live in the built world, where they consume energy and effort.
How could we think differently about systems – of technology, of people, of culture and country, and of this place? It might involve asking questions for which there are not ready and easy answers. It might also involve touchstones from the past to help inform our present and perhaps our future. History, after all, may not provide the answers, but it should allow us to ask better questions.
WHEN I THINK about AI, one image lingers in my imagination. It is from 1956, a black-and-white photo taken by a woman named Gloria Minsky; she had accompanied her husband to a summer conference at Dartmouth College in New Hampshire. The photo shows seven earnest-faced, comparatively young white men relaxing on the lawn in front of an unremarkable building – among them Nathaniel Rochester, John McCarthy, Claude Shannon and Gloria’s husband, Marvin. These four men were the key organisers of the Dartmouth Summer Research Project on Artificial Intelligence. This is the moment that AI came into being.
These men, all from elite American organisations, had diverse backgrounds and interests. Claude Shannon, at Bell Telephone Laboratories, was regarded as the founder of information theory; Nathaniel Rochester had designed IBM’s first commercial scientific computer, the 701. Marvin Minsky and John McCarthy were both recently minted PhDs: Minsky was on a fellowship at Harvard and had built a very early neural network, while McCarthy was working on the theory of Turing machines and had strong ties to John von Neumann – who had consulted on the ENIAC and helped define the stored-program architecture of the world’s first modern computers.
Together, they had gained funding from the Rockefeller Foundation for a summer workshop exploring what they called ‘artificial intelligence’. The list of proposed attendees was, by the standards of the 1950s, an interdisciplinary group, with backgrounds in philosophy and mathematics, psychology and the emerging field of computer science. And the conference had important backers too, both in government (including the military) and industry. AI was not simply an academic preoccupation; it was all about business from its very beginnings.
The funding request laid out the first framework for AI:
…every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.
This was an ambitious agenda, but then the expectations were that computing technology would continue its remarkable growth beyond the ENIAC and IBM 701, which created the impression of an endless evolution of power and potential. As a result, these early founders believed much of their initial research agenda could be achieved within a decade. This was not to be – and perhaps that is just as well.
For their research agenda was missing several important resources and perspectives – namely people, culture and a sense of the broader world in which their AI might unfold. Which is startling, because while AI might have been named and claimed in 1956, much of its intellectual agenda had earlier roots in conversations that started in the 1940s – some of which had included and been shaped by some of these same Dartmouth convenors. Those conversations had taken a much more expansive view on the world of technology, framed around ‘cybernetics’.
As defined by Norbert Wiener, an American mathematician, cybernetics was ‘the scientific study of control and communication in the animal and the machine’, and also in society and in the individual. In particular, for Wiener and others, it was about the study of feedback mechanisms and circular causal systems, including in the newly proliferating space of computers. Indeed, the conversations about cybernetics were energised by, and in direct dialogue with, the strides that were being made in computing architecture and performance, and by the hope that this computational power would help unleash human potential in the sciences and arts. The idea was that cybernetics would inform new ways of making decisions and organising resources – new ways of being and doing, new systems.
Wiener coined the term cybernetics himself, drawing inspiration from the Greek word for helmsman, kybernetes, illustrating his belief that the science of cybernetics would be the science of steering, or control, broadly defined.
It was about a certain kind of power. At the end of World War II, the power of computing – Wiener’s ‘machine’ – was starkly visible, and its potential for scientific, political, economic and social transformation seemed extraordinary. Theorising the relationships between that machine and both humans and the natural world felt critical and timely. Cybernetics was Wiener’s framework for mediating the relationship between people and the new machines, and for processing the technical and other kinds of knowledge this relationship would bring forth. For a time, it worked. Scientific discoveries were aided by computers, as were new forms of business, automation and productivity.
Between 1946 and 1953, the Macy Conferences on cybernetics brought together a range of thinkers from across the disciplinary spectrum to explore the idea of cybernetic systems that would enhance humanity. Curated in part by anthropologists Margaret Mead and Gregory Bateson, the meetings were radically interdisciplinary, and represented an attempt to constitute a new body of academic knowledge and a new discipline. They must have been extraordinary events: ten conferences in all with topics ranging from mind control to memory, octopuses’ consciousness, childhood learning and development, the subconscious, technical systems, computation and abstract linguistics, to name just a few. There is a thread running through many of the conversations about how we might make sense of human cognition as some kind of system, especially, one imagines, in order to help determine whether computation will ever marry or match it. What made something intelligent, and how might it be learned, communicated and studied? Here we see the beginnings of the AI agenda that would follow.
There is something important to be claimed – or reclaimed – from those Macy conversations, for while there was a great deal of interest in how the mind works, there was also a clear and deliberate examination of the role of technology in our lives. In the fading shadows of World War II, it was clear computers would have a profound impact on our futures, and Mead and her contemporaries fretted about how to theorise a cybernetic system in such a way that it could accommodate humans and culture, and even the environment. The atomic bomb was a graphic reminder of the power of technology to profoundly reconfigure the natural world. Participants at the Macy Conferences wanted a different kind of technological future – something far less destructive, although they clearly did not yet grasp the energy needs of the computing systems they were building, and their ultimate cost.
The Macy Conferences captured public attention with their stories of a future with machines, of an automation that would create new kinds of jobs and new kinds of possibilities. Cybernetics featured regularly in the popular press, and the conversations and debate rippled out through the US and beyond. And then, it seemed to disappear.
In an interview many years later, Margaret Mead reflected on those conversations and on the power of an interdisciplinary mix to bring something new into the world. Sitting across the kitchen table from her by-then former husband Gregory Bateson, with a reel-to-reel tape recorder spinning between them, she would recall:
There were the mathematicians and physicists – people trained in the physical sciences who were very, very precise in what they wanted to think about. There was a small group of us anthropologists, and psychiatrists, who were trained to know enough about psychology in groups so we knew what was happening, and could use it and disallow it. And then there were two or three gossips in the middle, who were very simple people who had a lot of loose intuition and no discipline to what they were doing. In a sense it was the most interesting conference I’ve ever been in, because nobody knew how to manage this thing yet.
I have always imagined that Mead was referring to the mix of people at the Macy Conference when she said that nobody knew how to manage this thing yet. But perhaps Stewart Brand, editor of the Whole Earth Catalog, who would publish the transcript in 1976, heard something more. The Whole Earth Catalog – a remarkable 1960s and 1970s compendium of material culture and how-to guides – was all about imagining a different kind of world and a different kind of future. As the name suggests, one that took the whole of the Earth as a starting point. Brand, through his catalogue and his actions, would re/ignite conversations about cybernetics for another generation. This next cybernetic wave would continue to engage with the future of computing and of humanity, and it would also focus increasing attention on the broader ecological dimensions.
IN 1956, AT the Dartmouth Summer Research Project, McCarthy and his colleagues had speculated that intelligent computers would be capable of creative acts and might make new artistic forms. This certainly built on early cybernetic imaginings from the Macy Conferences, and other intersections of technology, culture and design. But the Dartmouth AI quickly focused on areas such as strategy, reasoning and language.
Yet a little more than ten years after Dartmouth, on the other side of the Atlantic, a remarkable woman curated her first major exhibition at the Institute of Contemporary Arts in London – one that brought the future of computers into a very different frame and allowed a broader future to peep through again. It had taken Jasia Reichardt three years – and a lot of arm-twisting, travel, networking, and some funding from IBM and the US Department of State – to pull it off. She called the exhibition Cybernetic Serendipity, and it showcased the work of 325 diverse participants from Europe, North America and Japan. Boeing, General Motors, Westinghouse, Bell Telephone Laboratories and the US Air Force Research Laboratory were all represented, as were artists Bridget Riley and Ulla Wiggen, radical composer John Cage, and others whose work defied easy definition, such as Gordon Pask, one of Wiener’s disciples, and Nicholas Negroponte, who would later found the Media Lab at MIT.
The exhibition featured digital music, light, poetry, sculpture – all created with and through computers. Throughout the summer of 1968, as many as 60,000 people roamed its expansive halls. In more than 600 square metres of space they might encounter a potted history of cybernetics alongside a robot that drew, or a Honeywell-sponsored demonstration computer shaped like an elephant, aptly named the Peripheral Pachyderm. There were also works from the Korean-American new-media artist Nam June Paik, computer-generated music and movies, wire-frame graphic representations from Boeing, Pask’s sculptural installation of televisions called Colloquy of Mobiles, and a light-sensitive owl.
It was unlike anything that had been seen before and it cracked open the world just a little bit.
Amid the light and noise and spectacle, there was a series of prints created through computer programs and printed on large-scale plotters. One of them, entitled Return to Square, might be the most beautiful thing I have ever seen a computer make, certainly the most beautiful thing that was ever made using Fortran – the early IBM programming language. It features a square that slowly metamorphoses into a profile of a woman and then reverts to a square again: simple and striking.
The work came from a collective of artists calling themselves the Computer Technique Group (CTG) – the lone Japanese exhibitors to feature at Cybernetic Serendipity. Initially founded in 1966, CTG’s earliest members included Masao Komura and Kunio Yamanaka. Return to Square was derived from one of Komura’s ideas, and the Fortran programming was undertaken by Yamanaka. It was printed on a Calcomp drum plotter at the now defunct IBM Scientific Data Centre in Tokyo.
Referred to variously as radicals, electronic hippies and even the new samurai, CTG created new forms of graphic art, digitally produced poetry and computer-generated music – all of which they sent to Cybernetic Serendipity. It was a fitting mix, given that the creative process used by CTG depended on a combination of ‘cybernetic’ generation of patterns combined with the ‘serendipity’ of randomness. CTG clearly had their own cybernetic vision: one that was relational, involving humans and society, never purely technology. Their manifesto, which appeared in the program for a Computer and Art symposium held at the Great Hall of Tama Art University in October 1967, makes clear their point of view:
We will tame the computer’s appealing transcendental charm and restrain it from serving established power. This stance is the way to solve complicated problems in the machine society. We do not praise machine civilization, nor do we criticise it. By a strategic collaboration with artists, scientists and other creative people from a wide variety of backgrounds, we will deliberate carefully [sic] the relationships between human beings and machines, and how we should live in the computer age.
This stance is perhaps unsurprising given that the founding members were architecture, product design and engineering students at a time when Japanese student activism was at a peak. CTG stayed together for slightly more than three short years, during which time they pushed computing (further) into the realm of creativity and art.
After London, Cybernetic Serendipity was boxed up with other works and sent to Washington, DC to be installed in the Corcoran Gallery of Art. From there, a smaller subset travelled to San Francisco, helping Frank Oppenheimer to launch his new science museum, the Exploratorium. Fifteen years later, he called Cybernetic Serendipity ‘a most important beginning for our place. It really set the stage for the kind of work we wanted to do because it combined perception, art, technology and science in a wonderful way.’ The exhibit opened in the waning months of 1969 and was closed before the new year. To this day, the head of John Billingsley’s robot Albert guards the entrance of the building. And the impact of the exhibition still holds a particular place in the ways we imagine the past, and the past’s imaginings of a different future.
IT WASN’T THE first time people in San Francisco had encountered the idea of cybernetics, or the first time that a future was cast in which technology and human life might co-exist. It wasn’t even the first time that art had been used to evoke this technological future. A full two years before Cybernetic Serendipity reached San Francisco, Richard Brautigan wrote a poem about the future, one that still ripples through the present. He was, by the late 1960s, already a well-established West Coast author of poems, short stories and novels. He often wrote about the natural world and about human relationships with that world. But this poem, with its combination of technology and nature, found wider circulation. In particular, the eponymous line – ‘all watched over by machines of loving grace’ – made its way into Silicon Valley folklore, turning up in the various histories of that place and its founders.
My copy of this poem is printed on a page torn from some kind of manual. You can still see the faint imprint of a technical specification diagram, and it unfolds in typeset:
I like to think (and
the sooner the better!)
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.
I like to think
(right now, please!)
of a cybernetic forest
filled with pines and electronics
where deer stroll peacefully
past computers
as if they were flowers
with spinning blossoms.
I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.
Was it an invocation in 1967? A hopeful request to the makers of the future? A year later, it wasn’t exactly spinning blossoms that the world saw, but at a live demonstration at the combined Association for Computing Machinery and Institute of Electrical and Electronics Engineers annual meetings in San Francisco, the future peeked through again. Over a ninety-minute period, Doug Engelbart, an electrical engineer at the Stanford Research Institute (SRI), and his team (which included Whole Earth Catalog’s Stewart Brand as the cameraman) would showcase a suite of ‘on-line computing’ technologies – including word processing, version control, a file-linking structure, real-time collaboration, hypertext, graphics, windows and a mouse. Engelbart was hugely interested in how computing technology could augment human intelligence and collaboration, rather than in building AI; he constructed something we now recognise as the personal computer to help make that distinction comprehensible.
It was a moment when the future of computing was suddenly clearly visible, and for almost a thousand people gathered in the room that day, it was a future they wanted to go off and inhabit. You can still watch this demo on the internet – the past of the present and the future, right there.
On 29 October 1969, the future showed up again when the American phone company AT&T connected two computers – one in Engelbart’s world at SRI and one at the University of California, Los Angeles (UCLA), about 570 kilometres apart on the West Coast. At UCLA they started typing the word ‘login’, asking SRI to report each letter as it appeared.
‘Do you see the L?’
‘Yes, we see the L.’
‘Do you see the O?’
‘Yes, we see the O.’
Then UCLA typed the letter G and the system crashed. Somehow, fittingly, that was the start of the internet.
In San Francisco, Cybernetic Serendipity was at the Exploratorium; I like to imagine that Doug Engelbart went there and saw another future.
More than fifty years have passed since Cybernetic Serendipity and the internet collided in California, and whole worlds have been built out of that intersection, and out of the imaginings, silences and visions of the people who gathered there. Many of us have inhabited those worlds and would rightly ask a lot of questions of them.
For me, I lived nearly thirty years at that very intersection in Silicon Valley, most of it spent in companies that were born out of those moments in 1968 and 1969. The cybernetic meadows and forests of Brautigan’s imagination have not been realised, and the machines that watch over us now seem to lack loving grace. The AI that was promised in 1956 has not emerged, and technological revolutions have not led us to transcendence or a whole-Earth point of view. According to a 2018 news feature by Nicola Jones for Nature, the world’s data centres consumed in excess of 200 terawatt hours of electricity each year – more than the consumption of some whole countries, and about 1 per cent of global electricity demand. The same report estimates that the entire information and communications technology ecosystem – ‘including personal digital devices, mobile-phone networks and televisions’ – generates about 2 per cent of global emissions, putting it on a par with international aviation. And the internet? Well, enough said. Still, I am haunted by those earlier possible futures, and the worlds people imagined they could build. And now, as we think anew about building into the future, I wonder what could be our touchstones and reference points.
When I returned to Australia in 2017, I wanted to build other futures and to acknowledge the country where my work had started and where I was now working again. I knew I needed to find a different world and a different intersection, and to find new ways to tell stories of technology and of the future – I wanted some different pasts and some different touchstones.
I first saw a photograph of the Brewarrina Aboriginal Fish Traps in a Guardian news article, and the image stayed with me. That black-and-white photograph from the late 1800s showed long, sweeping lines of grey stones arcing across a fast-moving river. The water flowing around the lines of stones was tipped white at the breakpoints. And although there was no one in the image, the arrangement of the stones was deliberate, human-made and enduring. It was a photograph of one of the oldest known human-built technical systems on the planet. And while there are ongoing debates about its exact age – 4,000 years, 10,000 years, 40,000 years – there are no arguments about its complexity or sophistication.
It was December 2018 and a familiar Australian summer day – hot, windy and relentlessly dry – when I found my way to the banks of the Barwon, near the New South Wales and Queensland border, on the lands of the Ngemba people, to visit the fish traps. The ground was hard and dry and very brown: we were still in drought in 2018. There were few signs or directions, and nothing to suggest the importance of where I was. It did not look much like the photo either; the water was brackish and slow moving, and weeds choked the river in a swathe of startling green.
But you could still see the arc of stone nets stretching down the river from a modern concrete weir – and the sheer scale of the work was extraordinary. Given that many rocks had been taken from this riverbed and put into the foundations of nearby buildings, or cleared to make room for paddle steamers, this is a much-shrunken version. Still, you have to marvel at its size, and wonder where all the rocks came from, and how they were all moved to this place, and how long it must have taken to make, and why it wasn’t mentioned in the histories of Australian engineering and technologies we learnt at school.
These dry-stone fish traps are certainly the oldest and largest system of their kind in Australia. Known to the local traditional owners and custodians as Ngunnhu, their patterning was revealed by an ancestral figure named Baiame to his sons. Generations of Aboriginal people have shaped these stones into loose curves stretching down the river, mimicking fishing nets, allowing fish to be trapped in stone containers at different heights of the river. There were also pens with stone walls to keep fish – big and little – in clear, cool running water. This was a meeting place, a place where multiple different Aboriginal nations gathered, where ceremonies and ritual and knowledge were established and shared. It is still a significant and special place, and the local Aboriginal community continue, when they can, to fish there. The traps were added to the NSW State Heritage Register in 2000 and the National Heritage List in 2005. Standing on the banks of the Barwon, I came to think that the importance of this place was not about the traps per se. It was about the system those traps create, and the systems in which they are, themselves, embedded. This is a system thousands of years in the making and keeping. This is a system that required concerted and continuous effort. It was something that required generations of accumulated knowledge – about how the environment worked, about hydrology, about fish – and an accumulated commitment to continuing to build, sustain and upgrade that system over time.
The technical, cultural and ecological elements cement the significance of this place, not only as a heritage site but as a knowledge base on which contemporary systems could be built. Ideas about sustainability; ideas about systems that are decades or centuries in the making; ideas about systems that endure and systems that are built explicitly to endure. Systems that are built to ensure the continuities of culture feel like the kind of systems that we might want to be investing in now. This feels like the outline of a story of the future we would want to tell.
Silicon Valley, where I’ve spent a significant part of my career so far, is a place where the stories of past futures and their technologies are made and remade, and where many pieces of those pasts are erased or rewritten or just forgotten; where stories of the future are told all the time.
Now, we need to make a different kind of story about the future. One that focuses not just on the technologies, but on the systems in which these technologies will reside. The opportunity to focus on a future that holds those systems – and also on a way of approaching them in the present – feels both immense and acute. And the ways we might need to disrupt the present feel especially important in this moment of liminality, disorientation and profound unease, socially and ecologically. In a present where the future seems to have been derailed from the tracks we laid in past decades, there is an opportunity to reform. Ultimately, we would need to think a little differently, ask different kinds of questions, bring as many diverse and divergent kinds of people along on the journey and look holistically and critically at the many propositions that computing in particular – and advanced technologies in general – present.
For me, the Brewarrina Fish Traps are a powerful way of framing how current technological systems should and could unfold. They present a very different future, one we can glimpse in the present and in the past; one that always is and always will be. In this moment, we need to be reminded that stories of the future – about AI or anything else – are never just about technology; they are about people and about the places those people find themselves, the places they might call home and the systems that bind them all together.