The crumbling wall

I GREW UP in an era when science had an aura of certainty and solidity: it was 'the true exemplar of authentic knowledge', as the eminent sociologist Robert K Merton put it. History inevitably contains a subjective element, and there are different and legitimate views about the significance of a work of literature, but science was different. At school we learned which chlorides are insoluble and which metals are attacked by hydrochloric acid: no room for subjectivity or different interpretations there. We learned the laws of motion, as set down by Newton hundreds of years ago: not theories but laws that can't be broken. The science was incontestable, so your answer in the school test was either right or wrong. At university, all the physics and chemistry I learned as an undergraduate was solid, unquestioned knowledge.

Science spoke with a particular authority. It has been argued that other disciplines were affected by this perception; some observers think 'physics envy' led economics down the path of mathematical modelling and arcane theories that, applied to financial products, wrought havoc in the real world. That is another story.

Applying science had produced technical marvels that significantly influenced World War II, like radar, culminating in the fearsome weapons that obliterated two Japanese cities and brought the war to an end. Applying science to our domestic life gave us clean drinking water, protection from diseases, cleaner and faster cooking, better communications and a dramatically improved material lifestyle.

Governments cheerfully funded science, confident that the goose would continue to lay golden eggs. The highest-achieving school leavers were more likely to study science at university than arts, law or medicine, let alone courses in economics or commerce. The CSIRO, originally set up to improve our primary industries, extended its work into manufacturing and information processing. Most politicians worshipped the new deity and appeased it with regular offerings, though some were more cautious; Winston Churchill famously said that scientists should be 'on tap, not on top', working as directed.

To some extent, the perception that science produces permanent knowledge still applies in the laboratory. I recall my doctoral supervisor saying in admiration of a distinguished colleague, 'When he measures something it stays measured!' That fabled scientist spoke disparagingly of 'those romanticists', a research group he believed to espouse theories that went beyond the solid evidence from experiments. Predictability is not just a perception: ask a scientist to measure something under controlled conditions, the melting point of a specified alloy or the rate of reaction between two chemicals, and you can expect a precise answer. What's more, you can expect the same precise answer from any competent professional. In the artificial world of the laboratory, where all the relevant variables can be controlled, science gives clear and verifiable answers, with little room for subjective interpretation.


THE FIRST CRACK in the wall was the analysis by the physicist and historian Thomas Kuhn of what he called scientific revolutions. He used specific examples: the old earth-centred view of the universe and physics before the development of quantum theory. In each case the old science was working diligently within what he termed the dominant paradigm, a phrase that is now part of the political lexicon. He showed that 'normal science' accepted the existing theory and worked within it, collecting data and solving small puzzles. But the accumulating evidence led to increasing awareness that the prevailing theory was inadequate.

Even then, Kuhn argued, the old theory was not rejected until a new and better theory was developed to explain the observations. In each case, adherents of the old theory were understandably reluctant to admit that their life's work had been in vain. They often tried to find a contorted logic that fitted the new evidence into the old theory. Kuhn concluded the new theory would triumph only when proponents of the old one retired or ceased to have influence, leaving the field to the Young Turks who saw the improved explanatory power of the different approach: a scientific revolution.

I saw two examples of Kuhn's theory in action in the 1960s: the continents and the origin of the universe. The theory of continental drift had been around since the nineteenth century and made instinctive sense, since the east coast of South America is strikingly similar to the west coast of Africa and it looks to the amateur eye as if they could once have fitted together. But scientists could not imagine continents moving around on the face of the earth, and the theory was dismissed as populist nonsense. It all changed when measurements revealed the sea floor spreading outward from the Mid-Atlantic Ridge, showing that the continents were actually moving apart. Within about five years, the old superstition of continental drift had become the new science of plate tectonics. This in turn made sense of a wide range of previously puzzling observations, from the continuing growth of the Himalayas (the result of the Indian Plate colliding with the Asian Plate) to the biological parallels in Africa, South America and Australia resulting from the earlier existence of the super-continent now called Gondwanaland.

In the case of the origin of the universe, there were two competing theories at the time, known as the steady-state model and what was condescendingly called the big bang, the idea that the universe was still expanding from a cataclysmic event about fourteen billion years ago. There was no decisive evidence, so both theories were intellectually defensible.

My first experience of scientific controversy came when the University of Sydney brought leading proponents of the two competing theories to a physics summer school. After George Gamow expounded the big bang theory, Thomas Gold stood up and told the audience why he could not accept that explanation. The following day the roles were reversed, with Gamow explaining why he could not accept Gold's equally learned exposition of the steady-state model.

The debate raged for another decade or so. The evidence steadily accumulated in favour of the big bang, but some supporters of the steady-state model found convoluted ways to reconcile the new observations with their preferred theory. Eventually one crucial measurement effectively resolved the issue. Calculations showed that, if the big bang model was correct, there would be residual radiation today at a temperature of just a few degrees above absolute zero (nearly -270 degrees Celsius). When that radiation was detected, the argument was over.

In more recent times, a parallel was the debate in the scientific community about global climate change. The underlying basic science is well understood. The British physicist John Tyndall showed in the 1850s that carbon dioxide absorbs infrared radiation. Svante Arrhenius, the Swedish physicist and chemist, called it a 'greenhouse' gas in 1896, arguing that it had the same effect as the glass in a greenhouse. Glass is transparent to visible light, but absorbs infrared radiation. So when the sun shines on a greenhouse (or a car parked in the sun) the sunlight warms the interior. The heat would normally be radiated away, but the glass prevents this happening and the temperature rises, desirably in a greenhouse but uncomfortably in a car.

Arrhenius pointed out that the same 'greenhouse effect' occurs in the earth's atmosphere as sunlight passes through and warms the surface, but the radiation of heat into space is slowed by carbon dioxide, water vapour and other trace gases in the atmosphere. Two examples illustrate this effect. A clear night in winter is much colder than a cloudy night. The blanket of cloud and water vapour on a cloudy night slows the radiation of heat away from the earth into the cold night sky. The large-scale example is the climate on the moon, which is the same average distance from the sun as the earth. The moon has no atmosphere, so the temperature plummets when the surface is not receiving sunlight. The difference between day and night is about 250 degrees Celsius, compared with ten to twenty on earth, and the average temperature is thirty-three degrees lower. There is no doubt that the 'greenhouse effect' exists and makes conditions much better for life on earth. Arrhenius calculated in 1896 that doubling the amount of carbon dioxide in the air, say by burning huge amounts of coal, would increase the average global temperature by four to five degrees.

In the 1950s scientists began to measure the increasing concentration of carbon dioxide in the atmosphere. By the 1970s some were expressing concern that this could change the global climate. In 1985 a critical international conference reviewed the evidence and said that the release of greenhouse gases seemed to be changing the climate.

Both greenhouse gas concentrations and average global temperatures were increasing, but cautious scientists warned that the evidence did not prove a causal link. The Intergovernmental Panel on Climate Change was set up to examine the evidence and recommend responses. Its four reports reflect steadily growing confidence that the recent changes in global climate are a direct consequence of human release of greenhouse gases. This has persuaded most politicians that concerted action is needed.

A small number of reputable climate scientists, a group you could count on the fingers of one hand, still say they are not convinced of the causal link. They have been supplemented in the public debate by a larger group, some of them scientists outside the specialisation but most with no scientific credentials at all, who argue against action. The attention given by the media to those in denial has created a public impression that the science is uncertain, whereas it has been settled within the relevant community for at least a decade.


THE QUESTION OF the most appropriate response to this knowledge is a more complex matter, and science itself is of limited help. The science tells us that we need to curb the growth of greenhouse gas levels in the atmosphere, but choosing from the possible ways of achieving this involves economic, social and political issues as well as scientific assessments.

I chaired the advisory council that produced the first Australian report on the state of the environment. Our terms of reference allowed us to inform governments and the community about environmental problems, but not to recommend responses. Some saw this as a limitation, but I defended it as ensuring the validity of our report. The science can tell us, for example, if urban air quality is unacceptable. It can tease out the various contributions to the pollution levels. Since the main cause is motor vehicle exhaust gases, there are several possible responses. Each vehicle can be made cleaner, and this can be achieved by regulation or by financial inducements. The number of vehicles in the air shed can be curbed, again by regulation or financial incentives, or possibly by educating the community to recognise the health consequences of polluted air. The pattern of transport could be changed, perhaps by investing in better public transport. The entire transport task could be reconsidered, perhaps by measures to encourage people to work from home or closer to where they live. Weighing up the alternatives is mostly a matter of balancing the social, economic and political issues involved in reaching the agreed goal; there is no right answer that science can give.

The same argument applies to climate change. We know that we must reduce the rate of releasing carbon dioxide and other greenhouse gases, especially methane. This could be achieved by using cleaner energy supply technologies, by improving the efficiency of turning energy into the services people want, or by phasing out pointless uses of fuel energy. We probably need to pursue all three approaches, but the balance between them is as much social, political and economic as it is technical. Most qualified experts agree that we need to move to electricity supply technologies that put less carbon dioxide into the air than the present system, with its heavy reliance on coal. But there are genuine differences about the alternatives. Professor Barry Brook of the University of Adelaide does not believe that renewable energy systems like wind and solar can be scaled up fast enough to meet our needs, so he supports investment in a possible new generation of nuclear reactors that could avoid the chronic problems of the current industry. I am still sceptical about whether those problems can be solved, so I don't support the idea that Australia should adopt nuclear power. Unlike Professor Brook, I agree with the argument of the climate change campaign Beyond Zero Emissions in their recent report, Zero Carbon Australia, which sets out how a mix of renewable energy technologies could meet all our needs by 2020. Those two different assessments of the present uncertain situation are both intellectually valid; they are simply differing value judgements.

This leads us to a more fundamental problem. Thirty years ago the American nuclear scientist Alvin Weinberg argued that there is a class of problems which can be stated in the language of science, which are technical questions within science's sphere of knowledge, but which cannot be answered in terms that are acceptable within the scientific tradition. The examples he gave were the operating safety of nuclear reactors and the health consequences of low levels of radiation. If we eventually operate enough nuclear reactors for long enough, he said, we would have statistics good enough to enable accurate safety estimates, but even that hope is probably undermined by constant improvements in designs and operating systems.

In the case of ionising radiation, Weinberg argued, we can't conduct controlled experiments in which we systematically expose groups of people to different doses and then observe differences in their health. All we can do is monitor inadvertent exposure, and controlled processes like medical diagnostic exposures, and try to infer risk. Even if we had good data, he said, weighing up whether the slight increase in long-term health risk of exposure is justified by the benefits, real or alleged, of nuclear power or nuclear weapons is inevitably a value judgement.

There is a parallel in the approach we take to blood-alcohol levels. We know that alcohol affects our judgement. There is no threshold level below which the increased risk is zero, simply a decreasing risk as alcohol levels reduce. Different societies set different acceptable levels for drivers, trading off the increased risk of accidents against the social benefits of allowing moderate consumption of alcohol. For some classes of drivers, like those in charge of buses, we adopt a zero-tolerance approach. For the wider community, levels like .05 or .08 are simply a balance between competing demands.

Most people accept that we should not be gratuitously exposed to radiation, so there is still concern about the nuclear weapon tests of earlier decades. Much of the unease about nuclear power stems from accidents like the Chernobyl disaster, which spread radioactive debris across a wide area. Medical diagnostic procedures are usually justified because there is a clear potential benefit that outweighs the risk. Even there, the Australian regulator is concerned about the increasing use of whole-body scans as 'fishing expeditions' in the absence of clinical indications, fearing that it increases risk more than it improves measurable health outcomes. In the 1950s Australians had regular chest X-rays to detect tuberculosis; the tests were discontinued when the disease became so rare that the rate of diagnosis no longer justified the radiation exposure. Deciding how much radiation we can be exposed to is, like the blood-alcohol level, a trade-off between competing demands; science can't give us a right answer.


MANAGING IN THE new world of uncertainty is a challenge to political institutions, but it is also a challenge to scientists. Politicians usually want a clear answer, a yes or no rather than a cautious maybe. When I am asked for expert advice, I get the impression that my reputation as an expert demands an assured response. The American journalist HL Mencken is credited with observing that for every complex problem there is an answer that is clear, simple and wrong.

It is important to be aware of uncertainty and give suitably qualified answers to complex questions. Science cannot say in advance whether genetically modified crops will have a disastrous impact on the natural ecology of a region, or whether a two-degree increase in the average global temperature will destabilise the Greenland ice sheet. The recent admission of this point by the UK's Royal Society was seized on with depressing predictability by the attack dogs of the Murdoch press to vilify those of us urging responsible action to slow climate change. For that reason some scientists are reluctant to admit uncertainty.

Some have a more general worry that the admission removes the cloak of authority from science. I think science actually has the opposite problem: a level of disillusion stemming from unwillingness to admit uncertainty. Scientific authorities confidently told the community that nuclear power was clean, cheap and safe. When it became widely believed that it is dirty, expensive and risky, the whole authority of science was questioned. This could have been avoided if scientists had been more guarded in their support for the technology. While most scientists were reluctant to give public assurances that 'mad cow disease' could not cross the species barrier and affect humans, some yielded to urgings from politicians and told the British public not to worry. The consequent outbreak of variant Creutzfeldt-Jakob disease did serious damage to the idea of scientific authority.

The New South Wales Land and Environment Court now has a process for dealing with scientific uncertainty in cases before it. When experts called by the two sides differ, the court can require them to produce an agreed statement summarising the area of common understanding, those questions on which they disagree and the evidence for the two contending opinions. This is a good model for the future. It recognises that science can't give simple answers to complex questions. In the real world of natural systems, there will always be areas of uncertainty, in some cases impossible to resolve on the time-scale required for big decisions. Scientists should be suitably modest about what we know and what we don't know, rather than overstating confidence in our current limited understanding. Decision-makers need to assess risks and consider the consequences of being wrong. The precautionary principle should be applied seriously. That is a better approach than misplaced confidence in scientific authority.

Griffith Review