AMONG THE HIGHEST hopes for Australia is our intellectual capacity. We already have a substantial profile in education and research, underpinned by a vigorous culture of independent debate which promotes original scientific ideas as well as theory and analytical narrative in the humanities. Our sceptical and anti-authoritarian temper, already manifest among primary school children, serves us well when it comes to challenging canonical verities.
Australia is also capable of producing a lateral critique of the world thanks, among other factors, to the independent non-globalised worldview of our Indigenous people, whose artistic productions have shown cultural leadership on a world stage. So Australia is good for thinking. But I wonder: is it good for research? When it comes to how we do research – which perhaps represents the pinnacle of thinking – what moral, creative and cultural leadership does Australian research management offer?
Contemplating the criteria that the Australian Research Council uses for evaluating applications (criteria mirrored in numerous other research selection and evaluation processes) reveals a potential moral lacuna. A very large proportion of the ARC’s judgement is attributed to the applicant’s track record, raising the question: is it fair?
Imagine an undergraduate marking rubric where 40 per cent of the grade is attributed to the marks the student received for previous essays. Throughout secondary and tertiary education, we scrupulously hold to the principle that the work of the student is judged without prejudice on the basis of the quality of the work. The idea that we might be influenced by the student’s grade point average is preposterous. Either the academics who mark have the faculty of judgement to assess independently or they do not; if they cannot rely on their independence of judgement, they should not be in the business of assessment.
Research managers would argue that grant processes are not about assessing research but assessing a proposal for future research. Proposals are funded on the basis of past research – which is reckoned to be predictive – as well as ideas for new work; and this prospective element makes it more analogous to a scholarship, which is decided on the basis of past scores in undergraduate performance. But the problem with this logic is that each of those past scores from school to honours is established on the basis of fully independent evaluations (where at no stage is past performance counted), whereas many of the metrics used in research have a dirty component of past evaluations contaminating fresh judgements. Track record is a kind of aristocratic capital inherited from one circumstance and passed on to another; and, like some reputational arrogance, it is held up by an inscrutable chain of congealed approbation.
To be fair, there are parallels between research grants and employment – it could be held that the contract established between funding body and academic is analogous to getting a job. As with a selection process for appointing applicants to an academic post, we are happy to aggregate the judgement of others in previous evaluations; we assiduously examine the CV and we assume that previous judgements were independent in the first place. But is this not a portrait of moral complacency, where we justify one flaw by the consolation that it is no worse than another flaw?
A good selection panel will, in any case, take the track record with due scepticism; after all, dull and uncreative souls could walk through the door with a great track record. If the selection panel is earnest about employing the best applicant, its members will read the papers or books or musical scores or whatever the applicant claims to have done, irrespective of where they are published, on the principle that you cannot judge a book by its cover. The only reason that research panels attribute 40 per cent weighting to track record is so as not to have to make a fully independent evaluation and take responsibility for it.
If, as an art critic, I relied on track record for even 5 per cent of my judgement, I would be considered incompetent and ineligible for the job. It would be professionally derelict to stand in front of an artwork and allow my perception to be swayed by the artist’s CV; and I am not sure what makes it more ethical to institutionalise such prejudice with a research proposal, which also deserves a totally innocent experience of merit. My judgement must absolutely not defer to anyone else’s, even to a small percentage. If it does, I disappoint the public in its expectation of independence and, above all, I disappoint my conscience.
MY CONCERN IS not with the ARC, which is no worse than the several institutions that are its supplicants. My concern is with research management as an arbitrary code across Australian institutions, which is less than creative and open to moral questions.
The excellence of institutions is understandably tied to their research. But how do we measure research – which has been the subject of the Excellence in Research for Australia (ERA) initiative – when the measure is likely to dictate research production and promote research in its image? Sadly, while the ERA had the potential to realise an unprejudiced and independent evaluation exercise, it adopted the prior evaluation dependency which characterises most processes in research management. In 2010, the ERA evaluations were informed, among other things, by ‘Indicators of research quality’ and ‘Indicators of research volume and activity’. Amazingly, research income featured in both of these measures. Even volume and activity are measured by income.
So we substantially measure research according to income, irrespective of the proportion that goes to eyeballing publications – intellectual objects that one can read and judge for oneself as an independent scrutineer. Again, we should not especially blame the ARC, because it follows the typical patterns that institutions and external agencies worldwide have adopted in ranking universities. But with regard to the ARC, its embarrassments over the ranking of journals (thankfully rescinded by the government last year) were a typical case of research management trying to judge quality mechanistically. In essence, its leadership on that score amounted to judging a book by its cover, which even children are taught never to do; I still blush that so many top academics were complicit in maintaining that prejudice, like brand-snobs who will only consider Prada or Moschino or Kenneth Cole. And while the Commonwealth eventually revoked this backward practice, analogous distortions of judgement persist in the arts and humanities, as with the categories of publication that are determined on the basis of refereeing or commercial distribution (as if that matters or can ever be consistent), which have a retrograde influence on open-access publishing through the internet – so clearly progressive and in the public interest.
How wonderful, then, that we have research income as an objective metric which avoids such embarrassments! Research income is the major driver, as they say, for institutional funding as well as being a key indicator in various league-tables that are not the responsibility of the ARC, the top places on which are jealously contested for obvious reasons. Inside institutions, research income is used to determine all kinds of benefits, such as Research Training Scheme places and scholarships for research graduates. And so we see the same problem. We judge merit by a deferred evaluation, in this case according to the grants that the research has been able to attract. It entrenches past judgement on criteria which may be fair or relatively arbitrary.
The grant metric is applied in various contexts with little inflexion beyond benchmarking according to disciplines. In any given field, academics are routinely berated for not attracting research funding, even when they do not need it. They are reproached for not pursuing aggressively whatever funds might be available in the discipline and which their competitors have secured instead. As a result, their research, however prolific or original in its output, is deemed to be less competitive than the work of scholars who have gained grants. So their chances at promotion (or even, sometimes, job retention) are slimmer. Such scholars live, effectively, in a long research shadow, cold and punished for their failure to get funding, even when the intellectual incentives to do so are absent.
Directing a scholar’s research by these measures might be suspected of being not only somewhat illogical but immoral. On average, the institution already directs more than a third of the salary of a teaching-and-research appointment toward research. That percentage should be enough to write learned articles and books, if that is the kind of research a scholar does. In certain fields, the only reason one might want a grant would be to avoid teaching or administration. But most good researchers enjoy teaching and think of it as immensely rewarding, a nexus which, in any other circumstance, we should be trying to cultivate. I have always found it sad that the rich synergies between education and research are implicitly devalued by an overriding structure which promises a delivery from teaching, as if this extrication is academically redemptive.
To get out of administrative duties is more admirable; however, even a $30,000 grant entails considerable administration, and with larger grants there is more employment, and thus more administrative work. You end up with more paperwork, not less, if you win a grant. The incentives to gain a grant are much less conspicuous than the agonies of preparing the applications, which tie the researcher into a manipulative game with no intrinsic reward and a great likelihood of failure and even humiliation by cantankerous competitive peers.
Because the natural incentives are absent, the unwilling academic has to be compelled by targets put into some managerial performance development instrument, where the need for achieving a grant is officially established and the scholar’s progress toward gaining it is monitored.
As a means of wasting time, this process has few equivalents; but if it were only wasteful, we could dismiss it as merely a clumsy bureaucratic encumbrance that arises in any institution that has policies. But after a long period of witnessing the consequences (as one of those erstwhile academic managers) I suspect this wasteful system may also be morally dubious, because its inefficiencies are so institutionalised as to disadvantage researchers who are honourably efficient.
As a measure of the prowess of research, research income has a corrosive effect on the confidence of whole areas and of academics who, for one reason or another, are unlikely to score grants. Research income is a fetishised figure – it is a number without a denominator. If I want to judge a heater, I do not just measure the energy that it consumes but the output that it generates as well, because these two figures stand in a telling relationship to one another: one figure can become the denominator of the other to yield a further figure representing the heater’s efficiency.
To pursue this analogy, research management examines the heater by adding (or possibly multiplying) the input and the output. In search of a denominator, it then asks how many people own the heater and bask in its warmth. Similarly, we find out how many people generated the aggregated income and output. Sure enough, we attribute the research to people. But the figure is structurally proportional to income and therefore does not measure efficiency.
I question the moral basis of this wilful disregard for efficiency. Research management does not want to reward research efficiency and refuses to recognise this concept throughout the system. The scholar who produces a learned book or several articles every two years using nothing but salary is more efficient than another scholar who produces similar output with the aid of a grant. Alas, the concept of research efficiency is inimical to the structure of research management, because the mathematics of research quantification only contemplate research income as an arrogant numerator, perhaps to be multiplied by research output, but in no circumstances conceived as the pronumeral which divides research output to yield a construct of research efficiency.
The moral structure of Australian academia in its three main portfolios – coursework, research training and research – may be set out by analogy to certain historical epochs. Undergraduate studies may be likened to industrial modernism: efficient, keen on quality-control and risk-management, a bit impersonal to be sure, but economical, scrupulous, with aspirations to egalitarianism and a rigorous legal system to guarantee fairness of marks and opportunities.
Research graduate studies, on the other hand, would belong more to the stage of ethical development encountered in the Renaissance and Baroque periods. Once their candidates have been lured and taken into the fold, a seductive protocol of favouritism develops in the name of support: the patrician supervisors quite forget the rubrics and tallies that belong to coursework but function with as much nepotism as they wish in order to swing favours, indulgences and dispensations for their protégés and launch their careers in the court or the See.
But research? Research functions according to a yet anterior model, as if deep in the archaic past. It is feudal, consisting of thousands of rival knights in small principalities, each managing its hunger in relation to any opportunistic signals that spread across the land. Once elected to the peerage, a knight or lady can depend to some extent on aristocratic privileges, which – if sufficiently established by barons who are sufficiently fat – will yield promises, fearful pledges of continuities and support from princes. One of the main occupations of the knights is to keep pretenders out of the exclusive peerage.
If, suddenly, research efficiency became a factor in the formula – do not hold your breath – institutions would instantly scramble to revise all their performance management instruments, not because it is right but because there seems to be no moral dimension to research management, only a reflex-response to any arbitrary metric set by a capricious king. Individual cells of research management will do not what is right for research and knowledge and the betterment of the human or planetary condition but whatever achieves a higher ranking for their host institutions. Research management may be likened to sport in this regard, where the rules are largely arbitrary, and all that we can see is a contest that we are locked into and which we have to win to survive in the league.
It is commonly believed that research income as an indicator of quality is at least an economical metric, if not always fair. We tend to view such matters in a pragmatic spirit because we cannot see them in an ethical spirit. On the quality of funded research, I am personally agnostic because, when all is said and done, there is no basis for faith. There may be a strong link between research income and research quality, or there may be a weak or even inverse link, depending on the discipline and, above all, how we judge it. If a sage study were conducted in ten years’ time reviewing research in the arts and humanities, for example, many good academic souls would not be surprised should the report conclude that no book developing radical ideas was written during the period on the basis of a research grant, and that most funded research could be considered unimaginative, judiciously dressing up orthodoxies as progressive increments in knowledge or theory.
Perhaps, being circumspect, one could say research management is not so much immoral as amoral – in the sense of standing outside morality, free of ethical judgement, on the basis that it pretends to science – but any argument to unburden the field from moral judgement, thanks to an aspiration to neutrality, is not persuasive. Research management is never in a position where it can be amoral, because it concerns the distribution of assets that favour and yield advantages, and being outside the sphere of moral judgement is not an option.
It is good that we have research grants, because they allow research – especially expensive research – to prosper more than it otherwise would; but the terms of managing research, which rely so heavily on a chain of deferred judgements and which yield invidious and illogical rankings, involve processes resting on dubious moral assumptions. We can accept that research management is inexact and messy. None of that makes it ugly or immoral, just patchy and occasionally wrong. But the structural problems with research management go further; they skew research and damage the academic psyche, which suffers the same tragic loss of good karma that is the outcome of every moral lacuna.
Lecturers commence their academic career as researchers and, from early days, are researchers at heart. They love research: they become staff by virtue of doing a research degree and are cultivated thanks to their research potential and enthusiasm. Bit by bit, and with many ups and downs, they divide into winners and losers: a small proportion of researchers who achieve prestigious grants and a larger proportion who resolve to continue with their research plans on the basis of salary, perhaps with participation in other workers’ funded projects and perhaps with a feeling of inadequacy, in spite of their publications, sometimes prompted by pressure from their supervisor. Within this stressful scenario, even the successful suffer anxiety; and for the demographic as a whole, the dead hand of research management makes them anxious about their performance. In relatively few years, academics become scared of research and see it as more threatening than joyful; they pursue it with an oppressive sense of their shortcomings, where their progress is measured by artificial criteria devised to make them unsettled and hungry.
Though we dress up this negotiation in the language of encouragement, it is structurally an abusive power relationship that demoralises too many good souls in too little time. It is not as if we do not know about this attrition of spirit: many academics become exhausted and, with compounding frustration, opt out of research for good reasons.
Research management, which governs the innovative thinking of science and the humanities, is neither scientific nor humane nor innovative; and my question, putting all of this together, is whether or not it can be considered moral or in any way progressive to match the hopes that we have in research itself. A system of grants, however arbitrary, is not immoral on its own, provided that it is not coupled to other conditions that affect a scholar’s career. This process of uncoupling research evaluation from grant income on the one hand and future intellectual opportunities on the other seems necessary to its moral probity. Is it ethically proper to continue rating researchers by their grant income simply because it is convenient in yielding a metric for research evaluation? The crusade to evaluate research has been conducted on a peremptory basis, either heedless of its damaging consequences or smug in the bossy persuasion that greater hunger will make Australian research more internationally competitive.
Is such a system, so ingeniously contrived to spoil the spirits of so many researchers, likely to enhance Australia’s competitiveness? We were told at the beginning of the research evaluation exercise that the public has a right to know that the research it funds is excellent. But after so many formerly noble institutions have debased themselves by manipulating their data sets toward a flattering figure, we have no more assurance of quality than we did before evaluating it. Besides, no member of the public that anyone can name – other than perhaps one belligerent stirrer who also cast aspersions on the legitimacy of Aboriginal people – has ever entertained any doubts about the quality of Australian research. Who are these people who demand reassurance of quality in research, beyond the bureaucrats who instigated the various schemes? The conspicuous public attitude to research is respect and admiration, bordering on deference. So I wonder if there is any justifiable basis for research evaluation other than to provide the illusion of managerialism, or perhaps a misguided ideology that identifies hunger and anxiety as promoting productivity. I see massive disadvantages in our systems of evaluation but fail to see any advantages.
To maintain this disenfranchising system in the knowledge of its withering effect strikes me as morally unhappy and spiritually destructive. It would take a diabolical imagination to come up with a system better contrived to wreck the spirits of so many good researchers and dishearten them with their own achievement. It needs to be rebuilt from the ground up, on principles that dignify the generosity and efficiency of researchers. I look forward to a time when the faith that the public has in our research is matched by the faith that researchers themselves have in the structures that manage them.
This provocation is continued in The Conversation…