A great experiment

Finding sanctuary for attention in the digital world

IT’S EASY TO get lost in the disruption: our obsession with technology and how to regulate it; minimise our dependence; manage our kids’ screen time. On a personal and societal level, we find ourselves squaring a circle – ever more reliant on our devices and 24/7 connection, yet increasingly warned that these same devices are doing us harm. Our desire to understand and control the effects and excesses of technology use generally manifests most strongly in relation to others. This contradictory dynamic plays out with regard to children in particular. How do we justify almost constant personal technology use against the responsibility to set a good example? It’s easier to make a parental call than to deal with a potential personal addiction. But different rules for us and for them highlight the serious nature of the problem gnawing away at our attention. If we ever take a moment to question the extent of our own distraction, we realise the challenge we’re facing. As the scale of surveillance-capitalist creep becomes apparent in many aspects of our lives, we are caught between an abiding sense of the need to step back and a compelling pull to dive in deeper. It’s far easier to yell advice to someone else on the precipice (especially if they’re a kid) than to make the decision ourselves.

I recently took one small step backwards: I bought a battery-powered alarm clock. Consciously removing a screen from the last and first moments of my waking hours was a plan months or even years in the making. It’s early days, but at the edge of my consciousness there seems to be a gentle unwinding. Underlying my decision, though, is something more fundamental. As we come to terms with the information revolution and the socio-economic upheaval it has created, worry is growing about a potential societal de-tethering from meaningful social connection. Persuasive technology – the fact that your phone and its apps operate like a poker machine with a variable reward schedule honed by armies of psychologists employed by Big Tech companies – is pushing us to lead our lives differently. Differently, in this case, means in service to those companies and their customers (advertisers), not to our own desires or values. This should not surprise us. In willingly partaking in the extinction of privacy in exchange for convenience, we were until recently blind to the mutation of our social and professional activity online into a shadowy yet vast commercial operation. The surveillance-capitalist business model of Facebook, Google, Amazon and others is built on the assumption that a random and generally unwelcome collection of data points about some of your interests, online social interactions and consumption habits equates to an exact replica and predictable model of you as a person. This diminution of the self is then fed back to us in a reductive and reinforcing tailored experience (while being sold for massive profit). Understood in this light, there is a sense that there must be something more to us all that the algorithms and the developers are not quite capturing. Something more that is not being allowed to flourish.


NOWHERE IS FLOURISHING more important than at school. The New South Wales government banned phones in state primary schools from 2019. Following its lead, the Victorian government announced a ban in state primary and secondary schools from 2020. Victorian Minister for Education James Merlino’s press release states the ban will ‘help reduce distraction, tackle cyberbullying and improve learning outcomes for students’. But will it? As a marker of a shift in cultural norms it is certainly important. Clear policy deals a stronger hand to schools and teachers in an already challenging environment. The message to students is unambiguous: your phone undercuts your capacity for attention (and worse). Practically, though, most students at secondary school in particular have other devices in the classroom that provide access to everything a phone does. Cyberbullying will continue after 3.30 pm. As ever, the real solution is far more complex – it will take a lengthy and negotiated process of guidance, boundary setting and open conversations. That sounds like parenting. It also sounds like a more mature societal discussion on healthy technology use.

So how worried should we be about the impact of technology on our kids? Cultural diagnoses connect violence and mental health issues to increased screen time and social media use. This is of course a troubling thought when applied to children and teenagers. The brain’s increased plasticity during early childhood and adolescence makes young people especially susceptible to ill effects. On the basis of this fact alone, many anti-tech and anti-screen arguments are intuitively appealing.

There’s a busy market of quasi-parenting advice books out there claiming to boil down the scientific evidence and help people inoculate their children against any negative consequences of technology use. Foremost among their authors is Oxford University Professor Baroness Susan Greenfield – a neuroscientist who first spoke in the House of Lords about the infantilisation of the twenty-first-century mind in 2009 – recently joined by Australia’s David Gillespie, whose Teen Brain (Pan Macmillan) was published in 2019. Since her first intervention a decade ago, Greenfield has written Mind Change: How digital technologies are leaving their mark on our brains (Rider Books, 2014), and supported this publication with a host of op-eds and media appearances. Yet she has failed to publish any peer-reviewed research in this area, despite repeated calls from academic colleagues to provide evidence for what are alarming claims. Greenfield is a specialist in neurodegeneration, Alzheimer’s and Parkinson’s, not the adolescent brain, but her assertions on this topic have lamented the death of teenage empathy and literacy at the altar of new technologies. Whether it is computer games, social media, ‘entertainment’ or web browsing, Greenfield fears that connections could become detached in young brains, even causing temporary dementia. This ignores the fact that these activities and the devices associated with them are all, importantly, very different from one another. A 2015 editorial in the British Medical Journal about her work was scathing:

We are concerned that Greenfield’s claims are not based on a fair scientific appraisal of the evidence, often confuse correlation for causation, give undue weight to anecdote and poor quality studies, and are misleading to parents and the public at large.

A similar lack of rigour and penchant for hyperbole are evident in David Gillespie’s Teen Brain. The book’s argument frequently relies on trivialising the trials of drug addiction and demonising vulnerable drug users:

Less than a decade [after the release of the iPhone and then the iPad], our world is awash with addictive behaviours and our schools are the dealers. No one needs to commit crime to get the money needed for a hit…the new tools of addiction are purpose-built for the target audience and refined by market forces to be as effective as possible in addicting your kid.

As for nullifying the impact of ‘screens’, parents simply ‘need to harden up’ and they’ll be able to ‘save their kids’. One piece of advice reads more like a tutorial in toxic masculinity: ‘With boys, wherever possible it’s good to have any punishment administered by an adult male they respect… Testosterone only understands one thing: brute force.’

While neither Gillespie nor Greenfield has produced material in line with the scientific method, the underlying concerns here are justified. Much neuroscientific, psychiatric and psychological research is underway into the impacts of digital technologies, social media and the internet on young and old minds alike. In a 2019 article in World Psychiatry, ‘The “online brain”: How the Internet may be changing our cognition’, an international team led by Western Sydney University’s Joseph Firth made the sobering observation that for ‘better or for worse, we are already conducting a mass-scale experiment of extensive Internet usage across the global population’. But this review of the evidence was more balanced in its assessment of the effects of internet use and internet-enabled devices on our cognition and mental health. There are certainly ‘cognitive consequences of the attention-grabbing Internet’, and the paper quotes a study in which ‘over 85 per cent of teachers [endorse] the statement that “today’s digital technologies are creating an easily distracted generation”’, but ‘the long-term effects have yet to be established’ and there are even some potential benefits.

In line with this article, what emerges from a broad sweep of the literature is a very complicated picture. There are certainly cognitive impacts from increased smartphone, internet or gaming use, but they differ widely and the trends are not all negative. Some findings are deeply counterintuitive: media multitasking (the behavioural pattern associated with lots of devices or apps stealing your attention) does lead to shallow attention and, oddly enough, decreased ability to multitask, but first-person shooter games increase multitasking performance, sustained attention and short-term memory. Hours spent watching TV have a small negative impact on academic achievement, until they have no impact at all – it depends whether you’re watching documentaries or The Bachelor, and your privilege and support network will likely make up for that ‘lost’ time anyway (or they’ll make it worse). Indeed, socio-economic status can prove a more important variable than technology, platform or app. Adverse effects on attention from heavy multimedia use seem notably strong in younger adolescents, less so for older teens; for older adults, diverse stimulation from online experiences can actually negate cognitive decline. Teens with high wellbeing tend to use social media positively, whereas those with low self-esteem or wellbeing tend to experience increasing rates of anxiety, depression and isolation – especially girls. Context is everything: age, gender, prior mental health, class, which device and what app or game or show, duration of use… For those wanting certainty, long-term displacement of other activities and healthy routines is a precursor to negative effects in adolescents. The clearest message is the importance of sleep. ‘Screen time’ might be something to negotiate, but screen time that disrupts bedtime has a flow-on effect on general health and wellbeing – and, therefore, the likelihood of depression and anxiety.

After all those meticulously researched journal articles, it depends is an unsatisfying answer. In science, it is a professional virtue to avoid overstating findings or drawing causal links from merely compelling correlation. Yet some scientists are more willing to call a spade a spade than others. Jean Twenge’s 2017 piece in The Atlantic, ‘Have smartphones destroyed a generation?’, compiles generational psychological research into a deeply troubling profile of youth mental health. It’s hard not to be convinced. Twenge’s critics might argue there’s a need for more longitudinal research and less self-reported data. But if the self-reported data is worrying, shouldn’t we act on it?


WE’VE ALWAYS BEEN terrified about what we might do with technology, or what technology might do to us. Yet millennia of technological innovation have taught us that the future we fear is never quite the future we get:

What you have discovered is a receipt for recollection, not for memory. And as for wisdom, your pupils will have the reputation for it without the reality: they will receive a quantity of information without proper instruction, and in consequence be thought very knowledgeable when they are for the most part quite ignorant. And because they are filled with the conceit of wisdom instead of real wisdom they will be a burden to society.

This is Socrates, from Plato’s Phaedrus, written around 370 BC. Socrates wasn’t talking about the internet, or people on Twitter; he was worried about writing. Technophobia and its lesser forms have been around ever since.

What’s often forgotten is that humans design each new technology: to blame technology is to outsource our complicity in the potentially dystopian future we’re building for ourselves. Technology is a tool – it can be put to human use, or abuse. We misshape it at our peril. The unintended consequences of our current political moment (Cambridge Analytica as flag-bearer) suggest we have made this very mistake. While moments like this do make us more circumspect – maybe we even (briefly) deleted Facebook – our mistrust of particular platforms after this and other scandals only rarely translates to actual behavioural change. Once more, convenience slowly trumps outrage.

American communication academic and cultural critic Neil Postman wrote Technopoly: The surrender of culture to technology in 1992 (Alfred A Knopf). He cautioned we must be more conscious in our approach to technology adoption:

Once a technology is admitted, it plays out its hand; it does what it is designed to do. Our task is to understand what that design is – that is to say, when we admit a new technology to the culture, we must do so with our eyes wide open.

Postman was writing six years before Google, twelve years before Facebook, fifteen before the iPhone and eighteen before the iPad. If our eyes weren’t open then, are they any more open now? Much like Postman, Canadian media theorist Marshall McLuhan was described as believing that ‘we shape our tools and thereafter our tools shape us’. Recent neurobiological research bears out this hypothesis. The Current Biology paper ‘Use-dependent cortical processing from fingertips in touchscreen phone users’ (2015) reveals that thumbing through Wikipedia or swiping right on Tinder has rewired our brains. The more frequently and intensely you use your touchscreen, the more your brain lights up – but the long-term impact of this reorganisation of brain activity is, for now, unclear.

The sales quote on the front of Technopoly reads, ‘a tool for fighting back against the tools that run our lives’. Yet it often isn’t the tools we should worry about. Technology is simply a warped mirror, amplifying the best and worst in us. Consider Tay, a Microsoft AI chatbot released via Twitter in 2016 that turned racist and was shut down in just sixteen hours. There was nothing inherently racist in Tay’s programming – rather, a co-ordinated bunch of bigots tweeted so much vile content at the AI account that Tay very quickly internalised and imitated their language and behaviour.

There are two fruitful responses to missteps such as Tay. One is to deal with the ethics of the technological design process and of the humans behind it. This means questioning the underlying incentives hardwired into surveillance capitalism itself. Shocking content keeps us online longer, plain and simple – and the design of websites such as YouTube in particular monetises the attention of young people in a profoundly troubling way. The other response is to question the heart of the culture.

But neither of these paths is generally taken. Instead, we either blame the technology itself, as if it were hermetically sealed off from its context, or we blame the individual and make this a problem of self-control. By failing to ask systemic questions of the technology industry or to interrogate stubborn prejudices in society, we let the real culprits off the hook.


TECHNOLOGY IS OMNIPRESENT. That statement holds in the classroom, the workplace and every public space. What is challenging is differentiating its positive and negative effects. Internet-enabled devices and social media platforms can be incredibly emancipating for those who are often isolated in our society (people with disability, or those living in remote and rural areas, for example). Those without a community can find one and the sense of belonging that comes with it. These communities can also turn vile and deadly: before the Christchurch and El Paso Walmart massacres and the Poway synagogue shooting, each perpetrator posted a hate-filled manifesto on racist message board 8chan. Ultimately, this is a culture question. The tools for spreading poisonous ideology have changed. Action to physically and rhetorically protect a diverse and multicultural society remains as important as ever.

At the end of Technopoly, Postman asks whether ‘a nation [can] preserve its history, originality, and humanity by submitting itself totally to the sovereignty of a technological thought-world?’ Parents in Silicon Valley clearly don’t think so – they’re getting their babysitters to sign legally binding ‘no screen time’ contracts and creating online message threads for spying on naughty nannies. Meanwhile, Google and Mark Zuckerberg fund school programs and technology in generally poorer areas. It seems the digital divide is no longer about access to technology, but freedom from it.

What might that freedom look like? In an always-on, digitally connected, consumerist age, this is a difficult question. Individually, we can all feel technology tugging at the edges of our attention, every little micro-distraction pulling us further from our deeper aspirations. A sanctuary for attention is called for.

Regulated sanctuary is proving hard to achieve. Big Tech has deep pockets and a head start. Governments are waking from an almost fatal inertia bred by years of neoliberal, business-friendly rhetoric denouncing regulation and rules as so much red tape. A concerted push in Europe and the US in the second half of 2019 to clamp down on anti-competitive or monopolistic practices in Big Tech is certainly welcome. Yet even the €8.25 billion (AUD$13.4 billion) in European Commission fines slapped on Google over three rulings since 2017 pales in comparison with Google’s yearly revenue (AUD$190 billion in 2018). Sensible suggestions such as disabling the ‘like’ feature for younger users have made no headway. ‘Age-appropriate design’ remains a pipedream in government reports and inquiries. Take Facebook Messenger Kids: it may provide a safe forum for parent–child communication, but in truth it just gets around Facebook’s thirteen-years-and-over rule and creates brand loyalty from a younger age. Despite years of Facebook apology tours, lawsuits against Big Tech and widespread public outrage about the abuse of our data and the theft of our attention, not much has changed.

In the US, competition law is called antitrust, a poetic and apt summary of the situation in which people find themselves when it comes to data and attention. We don’t trust ourselves – and we certainly don’t trust our kids – to have the strength of mind to push back. We might even need technology to set the limits of appropriate use: if you can’t trust the app not to be addictive, trust another app to solve the problem. When it comes to trust, what is perhaps most contradictory here is that we trust tech more than any other industry. In the 2019 Edelman Trust Barometer, 78 per cent of respondents trusted the technology sector, while only 47 per cent trusted government and the media. These tech companies have proven time and time again that they are undeserving of our trust – but convenience and slick design have bought our loyalty from a young age. Is it plausible to hope for an effective global regulatory framework for data use in the face of statistics such as this? Perhaps not: Europe’s General Data Protection Regulation may set stricter rules for breaches, but adults and kids alike willingly give away their data. Privacy is a declining societal value. For any regulatory framework to provide real sanctuary, it would need to target the business model itself: the sale of data to third-party advertisers.

Technological sanctuary will come from the inside. This is about the tech sector developing a conscience. A public push for ethical technological design began with James Williams and Tristan Harris, both former Google employees, co-founding the Time Well Spent movement with former Couchsurfing CTO Joe Edelman in 2013. Their aim was to remake the technology industry, particularly the software it produces, around meaningful interaction, rather than distracted time that serves advertisers. Harris has since developed the movement into the non-profit Center for Humane Technology, the aim of which is ‘to realign technology with humanity’. The sales pitch is excellent, but the details remain fuzzy. In Andy Liddell’s 2019 Medium piece, ‘Sanctuary technology: Transcending surveillance capitalism’, he speaks of his desire ‘to frame an alternative to the surveillance paradigm’. But defining the values of a new age of technology doesn’t explain what that technology will look like or what it will do. Beyond the legitimate worry about today’s technology and the evangelical zeal to create something different, what comes next is vague. No matter how much humanity might need this change, without a concerted push from users – us – it’s hard to see the tech giants jumping to change much just yet.

For now, sanctuary for the self is our attention’s best hope. Self-regulation comes down to conscious use and quality screen time – FaceTime with a close friend rather than Facebook with a thousand fake friends. Algorithms struggle to make new, inhuman versions of people who use ad-blockers, consume less and do more offline. Encouragingly, and contrary to typical fears, younger people generally use ad blockers more than adults and are adept at manipulating their online environment and privacy settings. Where they fall down is in sharing more personal information in (mostly) private fora. Facts like this underline where we should refocus our attention when it comes to youth and technology: it’s not really the screen time that matters, it’s the values you learn that shape the human you’re going to become. Parents and technology can help or hinder that process.

Faced with the flagrant abuses of the tech giants, individualising our responses seems a meek acknowledgement of defeat by the surveillance-capitalist machine. But even if we’re not the source of the problem, admitting there is one is, as ever, the first step to resolving it. As we meet the ethical and psychological trade-offs in technology ownership and use, we all have choices to make, especially if the tech giants won’t make their own. My alarm clock doesn’t solve the questionable labour arrangements involved in the production of my phone, but it does give me more control over my relationship with the digital world.

In The Attention Merchants: The epic struggle to get inside our heads (Atlantic Books, 2017), Columbia Law School professor and author Tim Wu suggests that everything changed when advertising entered ‘what had been for millennia our attention’s main sanctuary – the home’. Now it’s so close it’s warming your thigh or your palm. This is certainly a great experiment we’re living through, and if we can’t opt out of the petri dish altogether, at least we can understand why we’re there to begin with – young and old alike.


Griffith Review