Science in an age of scepticism

Coping with a new age of controversy

  • Published 4 February 2020
  • ISBN: 9781925773804
  • Extent: 264pp
  • Paperback (234 x 153mm), eBook

IT IS FREQUENTLY claimed that science is facing a ‘crisis of trust’ in what has been termed the current ‘post-truth era’. As the Australian Chief Scientist Alan Finkel noted in early 2017, ‘science is literally under attack’, and the public debate around vaccines, climate change, genetically modified organisms, complementary medical therapies and a range of other issues speaks to a significant degree of public disagreement and dissent about some aspects of the processes and promises of science. In a 2017 survey in the United States, only 35 per cent of participants indicated that they had ‘a lot’ of trust in scientists – and the number who claimed to trust scientists ‘not at all’ had increased by more than half compared with a similar poll conducted in 2013. In the face of volumes of evidence and counterevidence from so many disciplines and experiments, as well as the diverse products and possibilities presented by science, it is perhaps understandable that some can feel overwhelmed. ‘Science’ itself has been brought into doubt and even disrepute.

Hence the call to restore public ‘trust’ in science, as evidenced in the marches for science held through 2017–18, petitions signed by numerous scientists to underscore the pressing climate crisis, and other initiatives. For participants in these events, the idea of trusting science is fundamental. When asked why we should trust science, the climate activist Greta Thunberg replied: ‘Because it’s science!’

My background in history and philosophy of science, as well as applied ethics and science policy, forces me into a conflicted position in these debates: although I find investigating scientific practices utterly compelling, and spend much of my time exploring them, I also know that history is never linear, and science in particular is never a straightforward march towards an unchanging goal. To expose the complexities associated with scientific practices, including setbacks, does not mean that I am ‘anti-science’: quite the contrary.

What has often been neglected in these discussions, but is critical, is the way in which the public engages with science: the key issue is not how to stop people believing ‘fake news’. There is no doubt that the media in all forms, social and otherwise, significantly influences how members of the public shape their opinions. And we face pressing questions about whether the media should make more active efforts to push back against traditional approaches that emphasise the need to provide ‘balance’ – The Conversation recently signalled that it will not run stories that endorse climate change ‘denial’, although how this will be judged undoubtedly raises a range of additional issues, as does the matter of what counts as ‘impartiality’ in media coverage. The idea of ‘balance’ can indeed be dangerous, if the voices that represent the ‘other side’ are not grounded in adequate evidence or are aimed at misleading readers or consumers. But an equally important part of the picture about our relationship with and understanding of science relates to that recent emphasis on the need for ‘trust’.

TRUST IS A central part of many of our human relationships with family, friends and our wider circles who support us in various ways. We confide in our lover or best friend knowing they will not betray our confidence; we trust our doctor to do their job and care for us.

In this way, trust is an extremely powerful concept, but also a very fragile and dangerous one: it requires assuming a certain level of risk. As the philosopher Onora O’Neill puts it: ‘Where we have guarantees or proofs, placing trust is redundant.’ Trust therefore involves vulnerability, especially in the relationship between the person who trusts and the person or institution that is trusted. We typically trust in someone to do something particular, such as perform a certain task or role. In this way, we tend to attribute competence to them in a specified domain. Our understandings of trust in other sorts of entities – certain governments, institutions and so on – are modelled on the sort of interpersonal trust we have in our close relationships: we trust our employer, government, particular non-governmental organisations and businesses, and some parts of the media depending on a range of factors, much as in the case of interpersonal trust.

The risk associated with trust is that a person or institution we trust may fail to behave as we expect or hope. This highlights the difference between trust and reliance: we can rely on inanimate objects, such as our car or phone, but strictly speaking we are not betrayed if they don’t work properly. The same is not true in trust-based relationships, and failures can result in a variety of negative outcomes – some relatively minor, but others with deeper consequences. For many people, a betrayal of trust often undermines or leads to the break-up of a relationship, be it with a partner, a relative, a tradesperson or even an institution. Consider the recent loss of trust on the part of many people in institutions from the Catholic Church to the big banks in Australia.

Given that trust is so fragile, how do we know whom to trust? For trust to be well grounded and justified, certain conditions must be present. We must think well of the trustee, believe them to be competent in a certain domain and attribute certain kinds of motives to them. Trust is also deeply bound up with our emotions: we cannot rely on rationality or logic to work through every instance where we might trust someone; we depend on intuitions and other cues.

Finally, even if we view science as a sort of ‘institution’, we know that trust in institutions is shaped by a variety of social and political forces: according to the 2018 Wellcome Global Monitor, the more unequal a nation, the greater the distrust of science among its population – although there are significant differences depending on the issue and the specific location. Trust in science is much higher in northern Europe than in South America and central Africa. But Bangladesh and Rwanda have the strongest confidence in vaccines – almost all people in both countries indicated that vaccines are safe and effective and that it is important for children to be vaccinated – while France had the lowest confidence.

BUT THE VERY suggestion of restoring ‘trust’ in science is misguided for a variety of reasons. First, the idea of ‘trust in science’ could be viewed as a sort of category error: it involves attributing a property (trust) to a thing (science) that could not possibly have that property. Science is not a homogeneous or specific institution: it is a widely dispersed, highly heterogeneous and ever-evolving set of practices. Hence although trusting particular institutions (or very homogeneous groupings, such as Australia’s four ‘big banks’) is a reasonable concept, ‘science’ as a whole does not possess the right attributes to be the sort of thing we can trust.

Even if we think in some more informal sense that we are able to ‘trust in science’, this approach is destined to fail for numerous reasons. For trust to exist, there must be conditions of mutual respect. Unfortunately, a growing lack of respect for the non-scientific public is often evident in both popular and social media, as well as among some scientists. Scientists as a whole are not necessarily to blame for this, but instances where a public lack of knowledge or understanding is equated by scientists or others with a wilful refusal to accept evidence, or even with outright irrationality, are growing. While some portion of the public may fit the label of science ‘deniers’, our broad reliance on various technologies based on science and the ways in which we incorporate scientific findings in our everyday lives (for instance, when we take medication or use a host of constantly updating and novel technologies, such as mobile phones or the internet) suggest that few people either deny or dismiss science as a whole, or the methods used in science in general.

Conversely, it’s possible to find dismissive attitudes among scientists and other experts towards lay perspectives that can make valuable contributions to scientific investigations. These might come from those on the ground, such as farmers; from other professionals; or from citizen scientists.

As Brian Wynne, a sociologist of science, notes, the non-expert world is not ‘epistemically vacuous’ – it provides important contributions to knowledge, scientific and otherwise, and often valuable insights about the natural world. The current heightened atmosphere associated with discrete and highly contested issues (vaccines, climate change and so on) has undermined mutual respect, particularly of scientists for non-scientists.

The converse is also the case: members of the non-scientific public are less confident than previously that science should be ‘thought well of’ or respected. Reasons for this range from concerns about conflicts of interest (especially related to commercialisation and financial profits in relation to drug companies or ‘big pharma’) to high-profile cases of fraud and misconduct (consider the recent case of the first gene-edited babies, where the scientists involved were found to be in conflict with local laws, ethical regulations and international standards). Both suggest a lack of rigorous oversight. At an even deeper level, there is increased public awareness of newly developed technologies in terms of not only the risks that might accompany them (in cases such as genetic modification, nanotechnologies or artificial intelligence) but also what their potential benefits might be – and to whom those might accrue.

The general public considers and evaluates risks differently from technical experts, regulators and scientists. Public assessments of risk are deeply connected to broader social and political contexts: as anxiety about the future increases and political instability rises even in established democracies (such as the UK and US), the resulting rise in cynicism and negativity has influenced the reputation of science generally, and will continue to do so, especially if new technologies are said to promise fantastic benefits that cannot then be quickly delivered.

Part of this scepticism arises from the oversimplified manner in which science is often portrayed: a focus on discoveries and some finding of ‘truth’ undermines a broader understanding of the complex processes associated with scientific research. There’s also an issue when milestones for success are set too high. Science is often surrounded by triumphalist narratives that promote hype about its achievements and potential, in part via the media but also sometimes originating in science itself, with, say, the cure for cancer around every corner. Increased competition for more limited funding as well as job precarity further fuel these trends. And the ‘crisis of replication’ revealing that many scientific studies are difficult or impossible to replicate or reproduce has also contributed to concerns about science’s reliability. In 2016, Nature surveyed more than 1,500 scientists, more than 70 per cent of whom said they had ‘tried and failed to reproduce another group’s experiments’.

At the same time, emphasis on ‘the scientific method’ reduces science to a one-dimensional caricature, displaying no understanding of the variety of methods that might be in play in the field in question, or of the ways in which these methods have evolved over time. Perhaps even more importantly, emphasising ‘the scientific method’ relies on an overestimate of how much people value what is unique about scientific knowledge compared with the value they place on everyday knowledge.

Historians and philosophers of science have long noted what Naomi Oreskes calls the ‘instability of science’ (described by Larry Laudan as the ‘pessimistic induction’: since past scientific theories that were once successful have subsequently been found to be false, we have no reason to believe that our currently successful theories are true, approximately or otherwise). Many scientific findings and theories have been refined, changed or overturned over time as part of the processes associated with science. Why should we expect it to be otherwise? One of the positive aspects of science is that it remains open: findings accrete over time, or on rare occasion undergo major changes through criticism, review and reflection.

If science is viewed as equivalent to truth about the natural world, then it is no surprise that disagreements between scientists are viewed as evidence of incompetence or worse. Yet scientists themselves now tend – likely inadvertently – to promote ‘science’ and its findings as unassailable and not open to debate. This could be said to have occurred in the marches for science: one was either for science or against it.

Discouraging the non-scientific public from doubting or questioning creates another tension. Combined with a more general decline in optimism, this discouragement makes it no surprise that the public struggles to know how it should view science, what its relationship to science is and what science can do for it.

In short, the required conditions to establish public trust in science can’t exist – not only because of the type of ‘thing’ science is, but also because of general pessimism on both sides, questions of appropriate motivation, competence and reliability, and an unwillingness to take risks and be vulnerable.

In many ways, this summer’s debates over the Australian bushfires provide evidence of just how conflicted our relationship with science is: as one commentator noted, there has been a tendency to engage in ‘slanging matches’ rather than debate or dialogue. Too often, opposing extremes are presented rather than a suite of detailed evidence with all of its complexities and uncertainties. What should be fora for debate and communication have become knockdown fights between people already entrenched in their views, lacking in constructive engagement and providing no basis for establishing a shared way forward.

IF THE CONCEPT of ‘trust’ is not the correct one to apply to science, what are our other options? One enjoying renewed popular support emphasises the need for more science literacy or education – but this implicitly equates an increasing lack of support for (or distrust of) science with a lack of information. This approach would reduce public involvement in science to something unidirectional, with non-scientists regarded as empty containers waiting to be filled up with knowledge. Such a model has not been productive even in the best of times, let alone in an era rife with suspicion and concern. It also tends to reinforce the oversimplified view of science in terms of progress and correct answers, rather than as a process that takes twists and turns, including errors and setbacks.

A key part of finding a more productive way forward is to set societal goals appropriately: rather than aim to restore ‘trust’ in science, we could instead focus on increasing everyone’s engagement with various scientific practices. Such a proposition is risky: the evidence from social science suggests that after discussing a particular issue, policy or technology, people often become more entrenched in their beliefs than they were initially, having gained more insights into its various complexities. This can lead to an initial polarisation in groups that have been exposed to and discussed a range of views, with people splitting into extreme positions rather than coming to a middle ground or consensus. And greater knowledge of science and use of more ‘rational reasoning’ have also been linked to more polarisation where issues are politically loaded, such as around climate change. In this way, as the psychology and legal scholar Dan Kahan puts it, our science communication environment is ‘polluted’. But authentic engagement in dialogue about matters of common concern can have positive outcomes over time.

The first step towards more constructive engagement with science is for us to reconsider what science is. Numerous science studies scholars (including Brian Wynne, Bruno Latour and Sheila Jasanoff) have convincingly argued that scientific knowledge is in an important sense co-produced by scientists and non-scientists in broader society. The key notions here are that scientific ideas and associated technologies evolve together with the ways in which these ideas are understood and represented by society, and that multiple stakeholders should contribute knowledge and expertise – particularly when making policy or other decisions related to science and its applications. The public are increasingly involved in funding decisions (in some instances, people with relevant disease conditions can select which trials are funded in regenerative medicine in the US) or in contributing to methods (smallholder farmers have helped guide weather monitoring and modelling in Argentina). Examples such as these make science more ‘actionable’ because results are both more relevant and more likely to be beneficial to the population that has been engaged. This position need not result in complete relativism or a popularity contest: various checks and balances remain (such as peer review, openness to criticism, mechanisms for generating consensus and so on) that allow science to be a useful and reliable form of ‘social knowledge’, as defined by the philosopher Helen Longino. But to ignore the fact that science is a collective process is neither accurate nor useful: as a collective enterprise with a robust culture of critique, it could bring the public back into its fold. After all, a significant proportion of the non-scientific public already recognises key norms of science, including transparency about methods and funding, and peer review and its related processes.

The next requirement is that we focus on shared values at the centre of our discussions about science: all of us, scientists and non-scientists alike, are part of a broader community within which science operates and to which it is accountable. As with other forms of citizenship, we must be active and willing to participate in deliberation and decision-making, even in domains where we do not have much expertise. The key here is not to debate specific results of science, but to make the broader structures that surround it transparent and robust through bidirectional and reflexive dialogue.

We also need to have more frequent and richer discussions about common purposes and the greater good to which science can be applied, such as has occurred in public consultations on topics ranging from pandemic resource allocation to environmental management. To become active science citizens, we must adopt principles associated with deliberative democracy and other participatory engagement approaches rather than engaging in divisive debate. We must be open to revising our preferences and opinions based on arguments and reasons that are made in light of shared and common interests.

In this way, we need to use a range of collective strategies to improve the environment in which we engage with science. There is no ‘one-size-fits-all’ strategy here – no one method of engagement can ever definitively represent ‘the public’ (which is not a homogeneous entity). Successful engagement tends to occur as part of wider public participatory efforts, rather than through one-off events aimed at specific impacts.

The advantage of an engagement-based rather than a trust-based model is that the former opens up a wider space for considerations of ‘conflicts’ in views, including value conflicts, different prioritisations of interests, contested knowledge claims and so on, and allows more prospects for their resolution. In addition, it permits more adequate incorporation of an accurate view of science and the knowledge produced through scientific processes. It emphasises the distinct methods that exist within science, and views science as a set of processes rather than a product or a set of truths about the world around us. It also allows for more diverse notions of relevant expertise, rather than pitting ‘science’ against all other forms of knowing. It is critical that we embrace these complexities: they are an essential part of science and of how it might produce public benefit, in contrast to more reductionist approaches.

Perhaps most importantly, an engagement approach to our relationship with science explicitly demands that we consider different and more complex answers to the question of why science is important. Rather than seeing science as providing inviolable truths and assuming its cultural prominence, it requires all of us to be reflective about how science is practised, and why and when we view it as reliable and useful. It requires scientists to be humble – which Naomi Oreskes identifies as a critical component of well-functioning science – and realistic about what they do and what they can do. If the practice of science were to go underground due to fears about its public reception, many dangers would result – not least of which would be the non-scientific public becoming even more alienated from its processes.

Science and its practitioners need to continue to open themselves to scrutiny, dialogue and engagement, rather than simply asking to be trusted. To correct this category error, we must emphasise a different model based on engagement and dialogue.

References

Anonymous (2017). Putting science at the centre of society (editorial). Sydney Morning Herald, 21 April.

Anonymous (2018). The best research is produced when researchers and communities work together (editorial). Nature, 562(7725), 7.

Baier, Annette (1986). Trust and antitrust. Ethics, 96(2), 231–60.

Barber, Bernard (1987). Trust in science. Minerva, 25(1), 123–34.

Beck, Ulrich (1992). Risk society: Towards a new modernity. SAGE, London.

Chilvers, Jason and Kearnes, Matthew (2019). Remaking participation in science and democracy. Science, Technology and Human Values.

Anonymous (2019). Climate change deniers are dangerous – they don’t deserve a place on our site (editorial). The Conversation, 17 September.

Daigle, Katy (2019). Public trust that scientists work for the good of society is growing. Science News, 2 August. https://www.sciencenews.org/article/public-trust-scientists-work-good-society-growing

Goodall, Jane (2020). Inflammatory exchanges. Inside Story, 7 January.

Hutchens, Gareth (2017). Australia’s chief scientist compares Trump to Stalin over climate censorship. The Guardian, 6 February. https://www.theguardian.com/science/2017/feb/06/australias-chief-scientist-compares-trump-to-stalin-over-climate-censorship

Jamieson, Kathleen Hall et al. (2019). Signaling the trustworthiness of science. Proceedings of the National Academy of Sciences of the United States of America, 116(39), 19231–19236.

Jasanoff, Sheila (ed.) (2004). States of knowledge: The co-production of science and the social order. Routledge, London.

Kahan, Dan (2012). Why we are poles apart on climate change. Nature, 488(7411), 255.

Latour, Bruno (1987). Science in action: How to follow scientists and engineers through society. Harvard University Press, Cambridge.

Laudan, Larry (1981). A confutation of convergent realism. Philosophy of Science, 48(1), 19–49.

Longino, Helen (1990). Science as social knowledge: Values and objectivity in scientific inquiry. Princeton University Press, Princeton.

McLeod, Carolyn (2015). Trust. In Edward N. Zalta (ed.), The Stanford encyclopedia of philosophy. https://plato.stanford.edu/archives/fall2015/entries/trust/

Muller, Denis (2020). Media ‘impartiality’ on climate change is ethically misguided and downright dangerous. The Conversation, 31 January.

O’Neill, Onora (2002). A question of trust. Cambridge University Press, Cambridge.

Oreskes, Naomi (2019). Science isn’t always perfect—but we should still trust it. Time, 24 October.

Oreskes, Naomi (2019). Why trust science? Princeton University Press, Princeton.

Rabesandratana, Tania (2019). These are the countries that trust scientists the most—and the least. Science, 19 June. https://www.sciencemag.org/news/2019/06/global-survey-finds-strong-support-scientists

Simis, Molly J. et al. (2016). The lure of rationality: Why does the deficit model persist in science communication? Public Understanding of Science, 25(4), 400–414.

Tsipursky, Gleb (2018). (Dis)trust in science: Can we cure the scourge of misinformation? Scientific American, 5 July. https://blogs.scientificamerican.com/observations/dis-trust-in-science/

Wynne, Brian (1996). May the sheep safely graze? A reflexive view of the expert–lay knowledge divide. In Scott Lash, Bronislaw Szerszynski and Brian Wynne (eds.), Risk, environment and modernity: Towards a new ecology. SAGE, London, pp. 44–87.

About the author

Rachel Ankeny

Rachel A Ankeny is Professor of History and Philosophy and Deputy Dean Research in the Faculty of Arts at the University of Adelaide, and...
