The future is hackable

Apocalypse and euphoria in a deepfake world

  • Published 7 February 2023
  • ISBN: 978-1-922212-80-1
  • Extent: 264pp
  • Paperback (234 x 153mm), eBook

How we move forward in the age of information
is gonna be the difference between whether we survive
or whether we become some kind of fucked-up dystopia.

Obama deepfake, 2018


WHICH OF THESE is true? An onscreen Salvador Dalí posed for selfies in a Florida museum; Volodymyr Zelensky urged Ukrainians to surrender three weeks into the Russian invasion; the judges of America’s Got Talent, none of whom sing opera, performed Puccini’s ‘Nessun Dorma’; David Beckham broadcast an anti-malaria appeal in fluent Arabic and Mandarin; Snoop Dogg read tarot cards on a psychic TV hotline; Obama publicly labelled Trump a ‘complete dipshit’; Mark Zuckerberg boasted ‘whoever controls the data controls the future’; and Kim Kardashian confessed she loves ‘manipulating people online for money’.

All are true in the sense that they happened. All are also fake: none of the people featured in these videos actually said, or did, these things. This post-truth era paradox is possible thanks to the astonishingly deceptive capabilities of deepfake technology, which uses Generative Adversarial Networks (GANs) to create credible ‘real-world’ audiovisual content that is, in fact, AI-generated illusion. 
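For readers curious about the machinery behind that acronym, here is a minimal, purely illustrative sketch – not any production face-swap pipeline – of the adversarial training loop a GAN runs, written in Python with the PyTorch library. The tiny networks and the random tensors standing in for ‘real’ images are placeholder assumptions, chosen only to keep the example self-contained and runnable.

```python
# Illustrative GAN training loop (PyTorch). All sizes, data and
# hyperparameters are placeholders, not a real deepfake pipeline.
import torch
import torch.nn as nn

latent_dim, image_dim, batch = 64, 784, 32  # 784 = a flattened 28x28 image

# Generator: turns random noise into a synthetic 'image'.
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# Discriminator: scores how 'real' an image looks (1 = real, 0 = fake).
D = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()
ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

for step in range(1_000):
    real = torch.rand(batch, image_dim) * 2 - 1  # stand-in for real photos
    fake = G(torch.randn(batch, latent_dim))

    # 1. Train the discriminator to separate real images from fakes.
    opt_D.zero_grad()
    d_loss = bce(D(real), ones) + bce(D(fake.detach()), zeros)
    d_loss.backward()
    opt_D.step()

    # 2. Train the generator to fool the discriminator.
    opt_G.zero_grad()
    g_loss = bce(D(fake), ones)  # generator 'wins' when fakes score as real
    g_loss.backward()
    opt_G.step()
```

The essential trick is the contest itself: the generator improves only because the discriminator keeps catching it out – which is also why, as discussed later, detection models can perversely be harnessed to train more convincing fakes.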

Initially the toxic plaything of incel misogynists incubated in the dark underbelly of the web, deepfakes emerged on Reddit in 2017 when an anonymous user pasted the faces of Gal Gadot, Taylor Swift and Scarlett Johansson onto the bodies of female actors in pre-existing porn videos. Since this insalubrious debut, deepfake technology has evolved at lightning speed: it is now possible to create convincing deepfake videos from a single photograph, and for deepfake simulations to converse with real people on Zoom. Deepfake apps are widely available and easy to use, and the wealth of data on our social-media feeds means that ordinary people – former partners, business rivals, local politicians, your next-door neighbour – are as likely to be targeted by deepfakes as celebrities. 

For many AI developers, deepfakes are just one tool in the increasingly sophisticated synthetic media arsenal revolutionising education, science, entertainment and commerce: part of a ‘golden decade’ of accelerating deep learning in which computers – if Blake Lemoine, the AI engineer who was sacked for claiming Google’s LaMDA chatbot had feelings, is to be believed – have already developed sentience. Elon Musk has warned that humanity is ‘summoning the demon’ by working with AI and may ‘do something very foolish’ without regulation. Futurist Ray Kurzweil prophesies we’re headed for an ominous ‘Singularity’ in which machines ‘will surpass human intelligence’, rupturing ‘the fabric of human history’. Stephen Hawking predicted AI would either ‘infinitely help’ or ‘destroy’ us. Deepfakes occupy a central place in this dystopian imaginary: by removing the gold standard of evidentiary truth, the non-fiction video, and by fulfilling the ‘realism heuristic’, which predisposes us to trust visual representations over written ones, deepfakes have turbo-charged fake news. They disrupt established assumptions about screen ‘truth’ so successfully that even filmmakers (myself included) have difficulty detecting them.


I FIRST ENCOUNTERED deepfakes in 2018, when filmmaker Jordan Peele puppeteered Obama in a piece-to-camera on YouTube, warning viewers we are ‘entering an era in which our enemies can make it look like anyone is saying anything’. I had used digital effects to explore the tenuous fact/fiction boundary in multiple documentaries – from the animated manipulations of Japanese otaku in Hell Bento!!, to hoax-author Norma Khouri’s labyrinthine deceptions in Forbidden Lie$, to the celluloid propaganda of North Korean filmmakers in Aim High in Creation! – so my curiosity was piqued. It became an obsession in 2021, when extraordinarily convincing Tom Cruise deepfakes began appearing on TikTok. A collaboration between Cruise impersonator Miles Fisher and Belgian VFX engineer Chris Ume, the TikTok Cruises went viral, reaching eleven million views within a week. To create them, Ume pasted AI-generated data of Cruise’s face onto Fisher’s body, enabling Fisher to ‘drive’ Cruise with his physical and vocal performance.

Touted by the media as ‘the most alarmingly lifelike examples so far of the high-tech hoax’, the Ume-Fisher deepfakes were a dazzling testimony to the speed at which AI had evolved. Unlike earlier deepfakes such as Peele’s, which featured talking-heads in locked-off frames to eliminate illusion-shattering camera movements AI had yet to learn to control, the Cruise TikToks were shot handheld in interior and exterior locations, shifted between close-ups and wides without edits and used foreground props. Their single-take amateurism heightened their authenticity: Cruise licked a lollipop, fell over a podium, cleaned a kitchen, danced to George Michael’s ‘Freedom!’ and, most spectacularly, vanished a coin, pronouncing ‘magic is the real thing’. I was stunned by the verisimilitude of the deception, and my inability to spot it. 

Deepfakes point to a future that is simultaneously euphoric and apocalyptic: philosophers have positioned them as ‘an epistemic threat to democracy’, journalists have called them ‘the place where truth goes to die’, futurists have portrayed them as the digital harbinger of a mass ‘reality apathy’ in which even video will be a lie. But for artists like Ume, and the growing network of VFX enthusiasts who create and share deepfakes in a dynamic interplay of performance and spectatorship, the technology is just a cool new toy to be harnessed for illusionary one-upmanship and fun. 

These radical oppositions suggest equally contradictory possibilities. Are deepfakes really the end of truth as we know it? Or simply an innovative special effect, which, like the panoply of audiovisual trickery throughout film’s evolution – from the analogue illusions of nineteenth-century cine-magician Georges Méliès to the docu-fakery of Orson Welles and the digital avatars of James Cameron – can be used for good or evil, depending on who’s pressing the buttons? Who are the scientists, artists, propagandists and criminals making deepfakes, and who is consuming them? Can deepfakes be accurately detected? Does awareness that we are watching a deepfake affect its ability to persuade us? When are deepfake viewers unwitting dupes and when are they knowing spectators? What are the ethical implications of using real people to create fake videos? Are deepfakes inherently malicious, or does moral responsibility rest with their designers and viewers? To repurpose an insidious pro-gun mantra – do deepfakes deceive people, or do people deceive people? And will it still be possible to ask these questions as synthetic media systems rapidly approach a state of ‘full’ AI, in which computers will be able to think for themselves?

To unpack this conundrum it is worth noting that the uncertainty deepfakes generate is not new: history is riddled with alarmist prophecies about emerging technologies that did not live up to the hype. The mass-produced eighteenth-century novel generated anxieties about addicted readers; the patenting of light bulbs inspired fears of blindness; cinema prompted gatekeepers to denounce it as ‘commercialised voyeurism’ and the death knell of theatre; the radio instigated worries about weakened ‘social morality’; and television was pilloried for encouraging violence. Coupled with a pervasive belief that technology is an unstoppable, autonomous force, the emergence of the internet, and its subsequent enhancements (search engines, social media, interactive gaming), generated similarly grim (but perhaps more accurate) predictions of technology-addicted users, with games causing ‘aggression’, smartphones causing ‘depression’ and social media causing a rise in narcissism and psychopathy. In 2007, ten years before deepfakes appeared on Reddit, the ease with which digital technology could manipulate reality was already creating concern that analogue markers of truth would, as film theorist Mary Ann Doane put it, lose their ‘credibility as a trace of the real’, and that legacy media faced a ‘crisis of legitimation’ in which its ‘referential grounding’ would collapse.

The credibility of video now seems quaintly old-fashioned, the nostalgic artefact of a more innocent time. As I write this, my smartphone dings with fresh alerts about the digital mischief perpetrated by deepfakery and its proliferating AI cousins: DALL-E, which creates artworks from text prompts; GPT-3, which writes screenplays to order; CogVideo, which delivers synthetic films from one-sentence synopses; Deepnude.to, which gives users X-ray vision, enabling them to ‘nudify’ pictures of women they know. An AI-generated artwork has won first prize at the Colorado State Fair; a TikTok influencer has deepfaked himself to convince followers his entire online persona is fake; and FN Meka, a Black cyborg created by two non-Black musicians, has been dumped by Capitol Records for a racially offensive deepfake showing him being beaten by White cops. Right now, a man is deepfaking a friend, a relative, a stranger he filmed on the bus, into a public sex video of his choice – without her knowledge.

Everywhere I look, deepfakes are being weaponised. The conflict between the beneficial potential of synthetic media and the corrosion of evidentiary, ethical and civil standards caused by AI’s expanding capabilities is intensifying. The following deep dive into dystopian and utopian predictions about deepfakes, and the extent to which they have come true (assuming ‘truth’ is still a legitimate term), is cloaked in the cognitive life-jacket of Amara’s Law: we ‘overestimate the effect of a technology in the short run and underestimate the effect in the long run’. I hope, as we wade through the pixelated currents and dissolving truths of the synthetic media sea, an answer to the crucial question underlying deepfakery – will the power of screen deception ultimately belong to humans or machines? – can be glimpsed. 

One caveat: this might have been written by AI. It’s impossible to know.


DEEPFAKES OCCUPY THE epicentre of an escalating tension between fact and belief in the digital media economy. Their rise coincides with the emergence of video as the preferred information format for the majority of consumers. The text-driven churn of Twitter now vies with the visual distractions of TikTok: recent studies indicate that users retain 95 per cent of audiovisual messaging and only 10 per cent of messaging read as text. In a post-truth arena already besieged by the comparative virality of fake stories over real ones, with public faith in ‘facts’ and ‘expertise’ eroding across the legal, political, academic and media spheres, deepfakes, in their five-year life span, have inspired dystopian prophecies that swerve from dread to moral panic. In 2019, US Congressman Adam Schiff warned deepfakes could ‘turn a world leader into a ventriloquist’s dummy’. Political researchers Cristian Vaccari and Andrew Chadwick concurred, asserting the ‘stakes are too high’ for deepfakes to be treated as ‘mere technological curiosities’. In the lead-up to the 2020 US election, computational scholars modelled seven credible deepfake scenarios that could undermine democracy. Mainstream media took up the charge, branding deepfakes as ‘weaponised disinformation’ on a ‘catastrophic scale’, the fiendish heralds of a looming ‘infopocalypse’ that, left unchecked, would generate a ‘perfect storm of misinformation’, wherein our inability to distinguish truth from trickery would damage civic society, leading to the collapse of reality.

The deepfake nightmare is one in which our fate is ‘hackable’ as malicious actors scrape the web for data, flooding social media with video hoaxes designed to manipulate elections, swing markets, embezzle corporations, implant false memories, sabotage court testimony, wage espionage, promote conspiracies, disseminate propaganda and blackmail ordinary people. It is a world in which video could become ‘the biggest lie of all’, generating an existential crisis in which citizens ignore the news altogether or exercise the liar’s dividend, Trump-style, by dismissing genuine recordings (lest we forget the grab-them-by-the-pussy tape) as ‘fake news’. 

Deepfakes also trigger a cascading collapse of journalistic truth-markers: corroborating a story, technology researcher Mika Westerlund cautions, may not be possible because deepfakes cruel the assumption that ‘whatever is said in public, a real person has said it, even though the statement may be false’. Videos may no longer be fact-checkable because deepfakes can be generated from real footage. Media literacy, which advocates identifying a video’s provenance to gauge its veracity, could also be unravelled by deepfakes, which are often undetectable to the human eye and constantly repurposed and shared, obscuring their original source. The greatest danger of deepfakes, Westerlund concludes, is not their ability to deceive us but that ‘people will come to regard everything as deception’.

This techno-paranoia is fuelled by the fact that deepfake detection programs can be harnessed to produce more convincing deepfakes in an AI reboot of Frankenstein’s monster. The accelerating deep-learning arms race has seen digital forensics researchers repeatedly outgunned, with the most accurate detection models, like those generated for Facebook’s 2020 Deepfake Detection Challenge, only able to identify deepfakes 65.18 per cent of the time. The simulative power of synthetic media is considered so malignant by some US and EU legislators that they’ve enacted laws to control it: not since legal actions against Google Glass and the VCR has a screen technology inspired such a punitive juridical response. The apocalyptic endgame of deepfakery is envisaged as nuclear war, catalysed by a convincing forgery of a world leader threatening to drop the bomb – or doing so.


THESE PREDICTIONS ARE chilling, but how many, to date, have come to pass? It is, in fairness, too early to confirm claims that deepfakes will destroy ‘truth’ as we know it: despite the speed at which AI is evolving, such a cataclysm would take time to unfold and, if it did occur, reductio ad absurdum, there would be no mechanisms left to confirm it. Deepfakes are also yet to kill off American democracy: apart from Ume’s 2020 satirical deepfake Run Tom Run, showing Cruise running – literally – for president, the fake news disrupting US elections since 2016 has predominantly used conventional image and video manipulations, not AI.

Outside the US, however, deepfakes are being used to subvert political discourse, disseminate propaganda and wage espionage. In 2018, Belgium’s Socialist Party released a deepfake of Trump declaring, ‘I had the balls to withdraw from the Paris climate agreement…so should you’, alarming party members who thought it was real. In 2019, a broadcast by Gabon President Ali Bongo Ondimba instigated civic upheaval when his static gaze convinced opponents he’d released a deepfake to hide his ill health, leading to an (unsuccessful) military coup. In 2020, video ‘sock puppets’ (synthetic humans generated from deepfake photos) appeared on a Zionist Facebook page, claiming to be left-wingers compelled to support Israel’s conservative Prime Minister Benjamin Netanyahu. Delighted users circulated the deepfakes on far-right sites, unconcerned by their provenance. In 2021, Leonid Volkov, Russian opposition leader Alexei Navalny’s chief of staff, supposedly conducted video calls with several EU officials who later realised they had been discussing sensitive diplomatic issues with an AI, prompting speculation the deepfake Volkov was a digital spy. In 2022, pro-Russian agents struck twice: in March, with Zelensky’s deepfake surrender, and again in September when a deepfake of Kyiv Mayor Vitali Klitschko met EU mayors online, claiming Ukrainian refugees were cheating the German welfare system and demanding their deportation back to Ukraine.

Deepfake researcher Hany Farid sees such deceptions as ‘the tip of the iceberg’ in a burgeoning information war deploying deepfakery to achieve lucrative political and commercial goals. Deepfake crime is flourishing: in 2019 fraudsters deepfaked the phone voice of a German CEO to extract US$243,000 from his company’s UK subsidiary; in 2021 grifters combined deepfakes with fake emails to convince a UAE company employee to transfer US$35 million to their account. A 2022 report released at the Black Hat cybersecurity conference found 66 per cent of incident responders had encountered deepfakes in cyberattacks. Deepfake human-rights abuses are also mushrooming globally, fulfilling predictions the technology will be used to attack and disempower citizens online: in 2018, Indian hackers deepfaked Washington Post journalist Rana Ayyub into a widely circulated porn video after she demanded justice for an eight-year-old Kashmiri girl who had been raped and murdered. Ayyub joins a line of high-profile professional women stripped and degraded by deepfake porn: UK poet and broadcaster Helen Mort; US politicians Lauren Book and Alexandria Ocasio-Cortez; Australian lawyer Noelle Martin. Thousands of ordinary women and girls are being similarly abused: data collated by Sensity AI indicates malicious deepfakes double every six months, and of the 85,047 deepfakes circulating by 2020, 90 to 95 per cent were non-consensual porn, 90 per cent of which targeted women. At the time of writing, these figures remain largely unchanged. Tellingly, while considerable attention and resources have been devoted to the technological and legislative prevention of political deepfakes (with dubious success), the fact that the overwhelming majority of deepfake victims are female remains insufficiently acknowledged and regulated. GitHub, Microsoft’s code-sharing platform, pledged to remove copies of the DeepNude app in 2019, yet versions of its source code remain hosted there. From the shadows of the web, toxic men continue to attack women with impunity.


AS A NASCENT technology, deepfakes are in what media theorist Simone Natale labels the ‘interpretive flexibility’ phase where different stakeholders compete ‘to impose a specific meaning on the novelty’. Utopian deepfake predictions separate the technology from the intentions of its users, depicting a positive and transformative future that incorporates two ‘emotional forces’ bioethical researcher Emilio Mordini identifies as integral to the harmonious integration of all new technology: curiosity and wonder. Freed from reductive, dystopian portrayals of deepfakery as the digital bastard of malicious hackers and fake news, the technology can be viewed neutrally – as a novel special effect in film’s ongoing quest for verisimilitude. Locating deepfakes in the illusionist traditions of cinema and magic, which trick the gaze but not the mind, widens the focus from machines that produce and control deepfakes to the humans who construct and circulate them. Deepfake creators – just as early technology-adopters have always done – are now repurposing this powerful tool to enrich and improve our lives. Their work illuminates an expanding site of cultural and social innovation, suggesting deepfakes might not, after all, be ‘the place where truth goes to die’.

Psychologists envisage using deepfakes to de-age the relatives of Alzheimer’s sufferers to strengthen memory bonds, and to help gender-reassignment and body-dysmorphia patients imagine their future selves. Grieving families are already finding solace in deepfakes of the deceased – most notably in 2020, when Kim Kardashian viewed a talking hologram of her dead father. Communications researchers are developing deepfake personal ‘assistants’ (the video equivalent of Siri); deepfake ‘skins’ that enable multilingual exchanges online; deepfake ‘mannequins’, which can be used to try on virtual outfits; and deepfake ‘influencers’, such as Lil Miquela, a progressive augmented-reality robot with 3.6 million followers. In entertainment, deepfakes participate in live TV debates, awards nights and comedy shows; fans deepfake themselves into movies and games; and musicians are being ‘youthified’ in concerts – such as Abba’s 2022 ‘Voyage’ extravaganza, in which the Swedish band performed as their ’70s digital ‘ABBAtars’. In education, deepfakes are being used to speak truth to power: Joaquin Oliver Comes Back to Life (2020) resurrects a shooting victim in a posthumous video, urging American voters to stop gun violence.

In cinemas, galleries and the YouTube showground, artists are using deepfake technology to provoke, subvert and astonish, fulfilling a perceptual contract as old as magic itself – in which our wonder springs from knowing we’ll be deceived, but still being surprised when we are. These artists treat deepfakes as a new VFX in a screen lineage rewinding back through the digital (green screen, motion-capture) and the analogue (rear-projection, superimposition) to cinema’s magical origins in the trick films of the ‘Godfather of CGI’, Méliès. 

But deepfake artistry also responds to a more ancient dream, the human automaton, whose ancestors are glimpsed in the mechanical puppets of ancient Greece, the tea-pouring robot of twelfth-century engineer Ismail al-Jazari, Wolfgang von Kempelen’s eighteenth-century chess-playing android and Peter Jackson’s twenty-first-century cyborg Gollum in The Lord of the Rings. In 2020 the synthetic human was given new form by deepfake artist Shamook, who created a full-bodied, de-aged Mark Hamill in Luke Skywalker Deepfake, which was so superior to the ‘dead-eyed’ CGI Luke of Disney’s The Mandalorian that Lucasfilm offered Shamook a job. Shamook belongs to an expanding community of practitioners and fans who circulate deepfakes with a playful cinephilia that rewards excellence (Princess Leia Fixed using Deepfakes), humour (Elon Musk Star Trek Deepfake) and believability (Keanu Reeves stops A ROBBERY!). Deepfake automata, such as the Skolkovo Institute’s talking portrait of the Mona Lisa and the Florida Dalí Museum’s Dalí Lives, exhibit a similar fascination with spectacular AI simulacra. The technologically literate and knowing viewers these works attract undermine dystopian predictions of stupefied audiences rendered powerless by deepfake deception.

Some political artists are harnessing this deception to reveal deeper truths: Stephanie Lepp’s Deep Reckonings (2020) reimagines polarising figures, such as US Supreme Court Justice Brett Kavanaugh, as their ‘morally courageous selves’; In Event of Moon Disaster (2019) subverts the fake moon-landing conspiracy with a deepfake of Nixon delivering a real speech, written in case the 1969 Apollo 11 mission failed; James Coupe’s Warriors (2020) harnesses deepfakes and facial-recognition software to critique the in-built bias of AI.

For mainstream filmmakers AI deception has a more utilitarian function, with synthetic media tools poised to vastly reduce editing, grading, special effects and production costs. Deepfakes can replace deceased or absent actors with convincing simulations, dub their voices in perfectly lip-synched foreign languages and swap jarring documentary re-enactments for seamless archival and dramatised scenes. Ethical concerns around the use of AI trickery in non-fiction film are growing: in Roadrunner (2021), director Morgan Neville used deepfake audio of the deceased Anthony Bourdain to make him speak words he’d never said. David France’s Welcome to Chechnya (2020), on the other hand, used deepfakes to disguise refugees from Chechnya’s anti-gay purges. In a captivating illustration of AI’s expressive power, the deepfake ‘skins’ worn by France’s subjects kept them safe, were more engaging than conventional pixelation and operated as a cinematic metaphor for the masks LGBTQI+ people must often wear in the face of violent discrimination.

It is clear from these works and the contradictory narratives proliferating around deepfakes that the synthetic media horizon is still expanding. It will soon be possible to create an entire feature film, in any genre, using nothing but keyboard prompts. The screen industry is alight with predictions AI will put filmmakers out of work, but I’m ambivalent. Art shows us what it is to be human. As long as AI’s mechanical imposters are trained in the Silicon Valley bubble, their stories will be neither diverse nor original – once the novelty of watching computers ‘do art’ wears off. The prevention of malicious deepfakery needs to focus on its abusers and victims, not the technology: in our AI-enhanced future, ethical and legal regulation is vital. As deepfakes become more embedded in our on-screen lives, it is likely the dystopian stench around them will dissipate and a more nuanced understanding of the technology, as one of a suite of synthetic media tools that can cause harm or happiness, will emerge. 

For now, and perhaps not much longer, the moral responsibility for deepfake deception rests not with AI but with us. One thing is already true: as the current zenith of screen-illusionism, the creative power of deepfakes is limitless. 


Based on the author’s research for ‘Deepfake Nightmares, Synthetic Dreams: A Review of Dystopian and Utopian Discourses Around Deepfakes’, Journal of Asia-Pacific Pop Culture, Vol. 7, No. 1: 109–135 (Pennsylvania State University Press, 2022).

 


About the author

Anna Broinowski

Anna Broinowski is a Walkley Award-winning filmmaker and author who documents counter-cultural subjects. Her films include Aim High in Creation! (about North Korean cinema),...
