Monday 30 July 2012

Is this scientific suicide?

I have ranted many times about the insane need for academics to publish at all costs, which has led to a profusion of cases of plagiarism and multiple submissions, and to an almost total reliance on impact factor in choosing where to publish their material.

So it was interesting to read a blog posting by David Colquhoun, Professor of Pharmacology at University College London, which was also printed in this morning’s Times. I have copied it verbatim below and invite your comments:

There are moments when the way a university runs its affairs is so boneheaded that it deserves scorn far beyond the world of academia. Queen Mary University of London is selecting which staff to sack from its science departments in a way that I can describe only as insane.

The firings, it seems, are nothing to do with hard financial times, but are a ham-fisted attempt to raise Queen Mary’s ranking in the league tables. A university’s position is directly related to its government research funding. So Queen Mary’s managers hope to do well in the 2014 “Research Excellence Framework” by firing staff who don’t publish a paper every ten minutes.

To survive as a professor there you need to have published 11 papers between 2008 and 2011, of which at least two are “high quality”. For lecturers, the target for keeping your job is five papers, of which one is “high quality”. You must also have had at least one PhD student complete their thesis.

What Queen Mary defines as “high quality” is publication in “high-impact journals” (periodicals that get lots of citations). Journals such as Nature and Science get most of their citations from very few articles, so it is utterly brainless to base decisions about the quality of research on such a skewed distribution of citations. But talk of skewed distributions is, no doubt, a bit too technical for innumerate HR people to understand. Which is precisely why they should have nothing to do with assessing scientists.

I have been lucky to know well three Nobel prizewinners. None would have passed the criteria laid down for a professor by QMUL. They would have been fired, and so would Peter Higgs.

More offensive still is that you can buy immunity if you have had 26 papers published in 2008-11, with six being “high quality”. The encouragement to publish reams is daft. If you are publishing a paper every few weeks, you certainly are not writing them, and possibly not even reading them. Most likely you are appending your name to somebody else’s work with little or no checking of the data, let alone contributing real research.

It is also deeply unethical for Queen Mary to require all staff to have a PhD student with the aim of raising the university’s ranking rather than of benefitting the student.

Like so much managerialism, the rules are an active encouragement to dishonesty. The dimwitted assessment methods of Queen Mary will guarantee the creation of a generation of second-rate spiv scientists. Who in their right mind would want to work there, now that the way it treats its scientists is public knowledge?

2 comments:

  1. Hi, I have seen a lot of this over the years, and the best description I have heard of this phenomenon is "gaming". They have made science into a game, and all you have to do is watch the BBC news about the Olympics to see what the pressure of winning makes people do! It has been said many times before that when one tries to quantify the output of creative people, creativity goes out the window. It seems the attitude of the administrations is that if they can't quantify it, then it must be no good. What is missing here is scientists speaking out and saying this is not acceptable. Sad, actually!

    Chris Pickles

  2. Comments received via the Minerals Engineers Group (http://tinyurl.com/c5d6wkp)

    Gerald Cadwell • I also think that it is a great article, so here is my two cents' worth. Since when have administrators and bean counters ever cared about excellence in science? One brilliant paper is worth a thousand others, but numbers are all that matter to administrators who cannot even understand the difference.

    Robert Seitz • The failure to arrive at good measures of academic performance has led us here. The role of the university as a research versus a teaching institution is unclear, as is the role of academics in the latter. We have picked measurements which are easy to compile, yet which may not relate to the desired end state! The administrators must show they are managing, though - yes? I wonder what measures there are of success for university administrations?

    The push to publish is ever more apparent in the quality of submitted work that never sees the light of day.

    I read an interesting article on open-access publishing recently, i.e., governments requiring all publicly funded research to be accessible to everyone, since they are paying the bill. It will be interesting to see how that interacts with impact factors and the like.

    Andrew Okely • It is a sad world when the impact of a university on society is measured over such a short period of time. Having been through the literature review process myself, I was appalled at how many "papers" were nothing more than reproductions of the same work, either by the same authors or, worse still, by other authors. I think my average was 3 in 100 papers on a topic actually adding new thought. As with all things, we will get the institutions we deserve if such behaviour continues.

    Martin Bakker • Unfortunately, the answer to the question is probably yes. The "publish or perish" mantra is extremely damaging. Why do we encourage academia to "recycle" the same work, sometimes with only the slightest modifications, in multiple journals? These metrics are truly insane; they devalue the educational institution and the journals that print the same articles. And then the universities wonder why they cannot attract industry support!!!


If you have difficulty posting a comment, please email it to bwills@min-eng.com and I will submit it on your behalf.