Monday 18 January 2016

Let's start to take bad science seriously

There is a lot of bad science around. A recent report by the Nuffield Council on Bioethics showed evidence of scientists increasingly “employing less rigorous research methods” in response to funding pressures. A 2009 survey found almost 2% of scientists admitting that they had fabricated results, and 14% saying that their colleagues had done so.
This is how I opened the posting 12 months ago and it would appear that things are getting worse rather than better. A recent report in The Times (January 5th 2016) suggests that public trust in science has become seriously undermined by work of poor quality, particularly in biomedical research. Editors of the journal PLOS Biology said that their discipline has a credibility crisis after several studies cast doubt on much publicly funded research. They concluded that there is an urgent need to improve the standards of research practice.
In an earlier article in The Times (December 4th 2015), scientific journals, universities and research institutes, funding bodies and scientists were accused of colluding in malpractice or misconduct by refusing to amend or retract research in which errors emerged. As many as 1 in 20 papers on biomedical science were said to contain errors or falsifications.
This is particularly worrying as peer-reviewed journals are meant to subject papers to scrutiny by experts and refuse to publish them if they fail to meet an expected threshold of scientific rigour. The report suggested that only a small proportion of biomedical research papers were subjected to genuine scrutiny before publication and concluded that the peer-review process was "clearly not fit for purpose".
Whether it is or not is something I do not want to rake up again, as the efficacy of peer review was discussed at length on the blog in March 2011, the general consensus being that although the system is flawed in many respects, it is the best method that we have at present for assessing the merits of research. I have no intention of trying to reform the process; I will leave that to the next generation.
Fraud and bad practice are not limited to biomedical research, of course, and I have been involved with many cases of plagiarism and other unethical practices while in charge of Minerals Engineering. The publish-or-perish mentality prevails in our field, as in other areas of science and technology. Fortunately, in our relatively small field these cases are fairly easy to spot, and the miscreants are dealt with, ensuring that they have no further dealings with the journal or any other Elsevier journal. But by far the biggest worry is bad practice due to ignorance of the proper way to collect and analyse data in mineral processing experiments, from simple lab tests up to full-blown plant trials and the analysis of production data.
How do we deal with this? We are not talking about fraud here, but a lack of knowledge of the scientific method, by researchers and their supervisors, and also, it has to be said, by those involved in the peer-review process, as papers with poor experimental design do slip through the net, finding their way into Minerals Engineering as well as other leading journals.
Education is obviously the key and fortunately we have on the Minerals Engineering Editorial Board a person who has crusaded for better practice in the statistical design of testwork. Prof. Tim Napier-Munn travels the world preaching his gospel via his short courses, and I would highly recommend that all mineral processors at least take a look at the short and highly readable article that he recently wrote for the AusIMM. The message in this article should be one that is etched into the minds of all final year undergraduates and I further suggest that Tim's excellent book Statistical Methods for Minerals Engineers should be required reading for all young researchers (and their supervisors) embarking on programmes of research involving experimentation.
I will be advising all Minerals Engineering reviewers to be vigilant, on the look-out not only for fraud, which leads to blacklisting, but also poor experimental design, which should lead to article rejection. I also ask for your comments, suggestions and any advice that you can offer to try to ensure that mineral processing is one area of research which does not fall prey to the criticism publicly heaped on the biomedical sector.

20 comments:

  1. Agreed Barry. It is bad science that is causing the broader public to become anti-science and even sceptical about good science. While there is some bad science in minerals processing and extractive metallurgy, the confusing signals from the dietary and health sciences, pharmaceutical sciences and climate science have stoked the fires of public distrust of research outcomes... To many, a scientific outcome is just as valid as some random opinion. It is a sad state of affairs.
    Jacques Eksteen, Curtin University, Australia

    Replies
    1. To fuel it even more, Jacques, the University of Bristol recently published the results of a "landmark" study claiming that diet drinks might be better than water in fighting obesity. Yesterday's The Times reported that this was funded by a food industry taskforce whose members included Coca-Cola and PepsiCo! It also suggested that the research may be 'fatally flawed', partly because it did not detail whether those who were supposed to drink water and avoid diet drinks consumed regular soft drinks instead!

      The UK Government guidelines on drinking alcohol have been revised this month, and amongst the recommendations is the estimate that if a man consumes 14 units of alcohol per week, his risk of an alcohol-related death increases by 0.95% if drunk across 7 days, 1.44% if across 3-4 days and 4.65% if all in one go. I would love to see the science which led to these conclusions!

  2. Barry, I think this remains a very real issue. I review regularly for a number of journals and there is a lot of bad science being submitted. It is very disheartening to read a manuscript with 6-7 authors that consists of a single set of short-term batch experiments, with almost no controls and a discussion that shows no critical engagement with the data or any reference to supporting work. Would it be possible to be more explicit in the guide for authors, highlighting that all authors must have made a meaningful scientific contribution to the work, that experimental work must include appropriate controls, etc.? I think that an important function of the review process is to provide meaningful feedback to the authors, but I do sometimes feel like stopping after the methods section when it is clear that the work has been poorly conceived.
    Rob van Hille, South Africa

  3. Barry, right on. It is not only in published work that poor science happens; in my world (industry) we suffer from the same issues, on a spectrum from mere incompetence to blatant fraud. On the one hand, 'wishful thinking' leads to statistical crimes like 'the fishing expedition', and this can be done without intent. I am also seeing a lot of 'semi-intent', where maybe people are fooling themselves - like cherry-picking data, ignoring inconvenient data, choosing controls that 'are easy to beat'. You cannot defend these acts as innocent because the tools to avoid them are out there (and you linked to some good ones).

    Luckily, I personally don't see a lot of flat-out fake data; this is because our ideas are tested by a commercialisation process pretty early on, and flat-out lies would come out in the wash.

    Indeed it is the scale-up that reveals exactly how even good scientists fool themselves.

    The trouble is, all of these problems are very hard to detect at the time. It means one almost has to assume the worst (zero faith) until a particular scientist has proven themselves over a long time with many proven successes. And then they have to maintain that reputation for life - a single fraud would (should?) mean the end of their career. Sad, but that is my reality.

  4. To paraphrase Andrew Lang: "Scientists use statistics as a drunken man uses lamp posts - for support rather than for illumination." Statistics provides a set of tools that enable the scientist or engineer to be a better judge; they are not a substitute for the exercise of sound engineering judgment, nor do they in and of themselves constitute scientific proof of anything. Researchers are often misled into making spurious deductions by regression effects. Without an underlying phenomenological model or hypothesis that establishes a causal link between the variable being measured and the effect being observed, statistics are simply meaningless. Here are some examples: http://www.tylervigen.com/spurious-correlations . Andy Carter

    Replies
    1. Exactly right Andy, though I think the Lang quote actually starts "An unsophisticated forecaster uses..." which absolves scientists from some of the blame. The antidote to nonsense correlations is to do properly designed experiments so that the statistically significant effects observed can be attributed to real factors. The best nonsense relationship I have seen (easy to find on the web) is an almost perfect negative correlation between US highway deaths and the amount of lemons imported to the US from Mexico (both driven presumably by time-based trends). Pity that's not a real effect because the strategy for reducing highway deaths further would then be simple, requiring only a modest change in diet for the American consumer.

      Tim Napier-Munn, JKMRC, Australia
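      [A hypothetical Python sketch of the point Andy and Tim are making: two completely unrelated series that both drift with time can show an impressively strong correlation, which all but disappears once the time trends are removed. The numbers below are invented, loosely in the spirit of the lemons example, not the real data.]

```python
# Sketch with simulated data: two independent quantities that both trend with time
# appear strongly correlated, even though neither has any causal link to the other.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1996, 2016)

deaths = 50000 - 900 * (years - 1996) + rng.normal(0, 800, years.size)   # falling trend
lemons = 200 + 15 * (years - 1996) + rng.normal(0, 20, years.size)       # rising trend

r_raw = np.corrcoef(deaths, lemons)[0, 1]

# Remove the linear time trend from each series and correlate the residuals
detrend = lambda y: y - np.polyval(np.polyfit(years, y, 1), years)
r_detrended = np.corrcoef(detrend(deaths), detrend(lemons))[0, 1]

print(f"raw correlation: {r_raw:+.2f}   after detrending: {r_detrended:+.2f}")
```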

  5. I'm not certain the scientific method is still being taught; or rather, I should say that it does not appear to be taught explicitly at university level. There seems to be an assumption that students (up to and including doctoral and post-graduate level) are smart enough and will just get it!

    One area I regularly discuss with engineers is problem solving. This includes interviewing interns and graduates and having discussions with working engineers with varying levels of experience. My observation (with many data points across many years) is that few have a structure in mind when it comes to problem solving. When asked how the scientific method fits in, it only occasionally seems to help them, and it rarely fits into their process for problem solving. Hmmm... What is going on?

    Someone else commented on reviewing papers. I would add reading the technical literature, as well as recent articles discussing attempts to repeat technical studies in some areas. These are all informative as to the state of the practice of science. More open critique and discussion is definitely a sign that the problem is recognized. Perhaps even more of this will help change the direction. Thanks all for this discussion.
    Robert Seitz, Freeport-McMoRan, USA

    Replies
    1. There are two good reference articles on data and analysis practices for broad consideration (a short sketch illustrating rules 7 and 9 follows the two lists):
      1. Ten simple rules for effective statistical practice
      2. 10 simple rules for the care and feeding of scientific data

      Ten simple rules for effective statistical practice
      http://www.stat.cmu.edu/~kass/papers/10rules.pdf
      1. Statistical methods should enable data to answer scientific questions. Shift perspective to asking and answering scientific questions.
      2. Signals always come with noise. Analysis must have aim of separating the two.
      3. Plan ahead, really ahead. Good design aids answering the question being investigated.
      4. Worry about data quality.
      5. Statistical analysis is more than a set of computations. Analytical methods need to align with scientific questions.
      6. Keep it simple.
      7. Provide assessments of variability. Measurements, experiments and process responses are all sources of variability; it is essential to provide some notion of the sources of observed variation.
      8. Check your assumptions. Statistical inference involves assumptions, which should be clearly identified.
      9. When possible, replicate!
      10. Make your analysis reproducible

      10 simple rules for the care and feeding of scientific data
      https://www.authorea.com/users/3/articles/3410/_show_article
      1. Love your data, and help others love it too
      2. Share your data online, with a permanent identifier
      3. Conduct science with a particular level of reuse in mind
      4. Publish workflow as context
      5. Link your data to your publications as often as possible
      6. Publish your code (even the small bits)
      7. Say how you want to get credit
      8. Foster and use data repositories
      9. Reward colleagues who share their data properly
      10. Be a booster for data science
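      [The short sketch promised above, illustrating rules 7 and 9 from the first list with hypothetical Python code: triplicate recoveries for a baseline and a modified condition, a confidence interval on the difference, and a t-test. The numbers are invented for illustration only.]

```python
# Illustration of rules 7 and 9: report variability and replicate.
# Hypothetical triplicate recoveries (%) for a baseline and a modified condition.
import numpy as np
from scipy import stats

baseline = np.array([85.2, 86.1, 84.7])
modified = np.array([87.0, 86.4, 88.1])

diff = modified.mean() - baseline.mean()
t_stat, p_value = stats.ttest_ind(modified, baseline)

# 95% confidence interval on the difference of means (equal-sized groups, 4 degrees of freedom)
se = np.sqrt(modified.var(ddof=1) / 3 + baseline.var(ddof=1) / 3)
ci = stats.t.ppf(0.975, df=4) * se

print(f"difference = {diff:.2f} +/- {ci:.2f} % (95% CI), p = {p_value:.3f}")
```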

  6. My dear Barry,
    Somehow I did not see this posting, which you say was put up 12 months ago.
    I really appreciate that you are bringing these things out in such a transparent and blunt manner.
    Sometimes the problem is with the research supervisors also; they try to guide students from different disciplines on topics in which they themselves do not have in-depth knowledge; they pick up these topics because there is money for those areas of work. Then most of the supervisors do not have time to read the articles prepared by their students before sending them off to journals.
    It is a real global problem.
    Thanks,
    Rao,T.C.

  7. Thanks for raising this important point, Barry. It really does need addressing. In fact I don’t think it is hard to do, and some progress has been made, but there is much more to be done.

    As you say, bad science has several faces. Outright misconduct (fabricating results) is one thing, and we need to be constantly on the lookout for it, but the really insidious and much commoner issue is ignorance about the proper methods of collecting and analysing data, and making proper conclusions on the basis of a rigorous treatment of the data. My favourite example is the flotation community (and some of my best friends are paid-up members of that community!) who regularly report a couple of grade-recovery or kinetic curves obtained under different chemical conditions, or for different ore types, and build an impressive edifice of conclusion on the often flawed assumption that the two curves are actually different, as they appear to be on the graph. Apparent differences may be no more than a reflection of experimental error. There should be no further excuses for that sort of behaviour because I showed people how to deal with the problem in the pages of your august journal a few years ago (Minerals Engineering, 34 (2012), 70-77), with full Excel solutions.
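    [A minimal Python sketch, with hypothetical timed-recovery data, of the kind of check Tim describes: fit a first-order model to each condition and to the pooled data, then ask with an extra-sum-of-squares F-test whether two separate curves fit significantly better than one. This is an illustration only, not the method of the Minerals Engineering 34 (2012) paper; that paper, together with replicate tests to estimate experimental error, gives the recommended treatment.]

```python
# Minimal sketch with hypothetical data: are two batch flotation kinetic curves
# really different, or is the apparent gap within experimental error?
# Approach: fit the classical first-order model R(t) = Rmax*(1 - exp(-k*t)) to each
# condition and to the pooled data, then compare with an extra-sum-of-squares F-test.
# (Illustration only; not the method of Minerals Engineering 34 (2012), 70-77.)
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import f as f_dist

def first_order(t, r_max, k):
    return r_max * (1.0 - np.exp(-k * t))

t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])           # flotation time, minutes
rec_a = np.array([32.0, 51.0, 68.0, 80.0, 86.0])  # cumulative recovery %, condition A
rec_b = np.array([36.0, 55.0, 71.0, 83.0, 88.0])  # cumulative recovery %, condition B

def fit_rss(times, rec):
    popt, _ = curve_fit(first_order, times, rec, p0=(90.0, 1.0))
    return float(np.sum((rec - first_order(times, *popt)) ** 2)), popt

rss_a, p_a = fit_rss(t, rec_a)
rss_b, p_b = fit_rss(t, rec_b)
rss_sep = rss_a + rss_b                                               # two curves: 4 parameters
rss_pool, _ = fit_rss(np.tile(t, 2), np.concatenate([rec_a, rec_b]))  # one curve: 2 parameters

n, k_sep, k_pool = 2 * len(t), 4, 2
f_stat = ((rss_pool - rss_sep) / (k_sep - k_pool)) / (rss_sep / (n - k_sep))
p_value = float(f_dist.sf(f_stat, k_sep - k_pool, n - k_sep))

print(f"A: Rmax={p_a[0]:.1f}%, k={p_a[1]:.2f}/min   B: Rmax={p_b[0]:.1f}%, k={p_b[1]:.2f}/min")
print(f"F = {f_stat:.2f}, p = {p_value:.3f}  (a small p suggests the curves really do differ)")
```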

    Having said that, we are making progress. Things are better than they were 20 years ago, which is very heartening. For example, there are now some mining companies and suppliers that mandate that batch float testing is done in triplicate so that proper conclusions can be drawn on the basis of understanding the inherent experimental error.

    I don’t want to repeat all the points I made in the AusIMM article you kindly referred to in your comments. But it is important to emphasise that most numerate professions routinely utilise proper statistical methods in the design and analysis of experiments and data gathering. It is entirely uncontroversial. It is in their DNA. In the case of clinical trials of new medicines such methods are actually mandated by international government regulation; national health authorities will not license a product unless they are satisfied that the proper statistical protocols have been followed.

    Contrast this with mineral processing experiments where there are no mandated protocols, everyone does what they want, and what happens is just a consequence of whether we are lucky enough to have an experimenter (in the lab or in the plant) who knows what they are doing. Sadly it sometimes happens that they don’t know what they are doing, and the results are there for all to see in the literature, and in non-optimal concentrator performance. Even some research supervisors and journal reviewers probably don’t know as much about these techniques as they should. So the checks and balances on which we all rely in the peer review process are themselves flawed.

    This is not a trivial matter, and although I acknowledge that I have something of an obsession about this stuff (it’s a personality defect), the brutal truth is that poor experimental method destroys shareholder value. We need to get this right, especially now with the industry in such a poor state. It cannot afford anything less than the best when it comes to collecting and analysing data. /cont
    Tim Napier-Munn, JKMRC, Australia

    Replies
    1. It's notable that societies and journals in medical and biological areas tend to have explicit author / researcher guidelines around statistical reporting. Some examples:
      1. Oberg, A.L., et al., The process of continuous journal improvement: new author guidelines for statistical and analytical reporting in VACCINE, Vaccine, 30, 2012, 2915-2917.
      2. Altman, D.G., Poor-quality medical research, JAMA, June 5, 2002, 287(21), 2765-2767.
      etc., etc.
      Searching engineering journals, such recommendations/guidelines are not evident. The question must be: why not?

  8. /cont
    I think that as a community we need to do three things (which I outlined in the AusIMM article):

    1. Stats should not be taught in first year engineering (other than the basic maths as part of engineering maths) but in final year, integrated into a final year (research) project and preferably taught by an engineer, because the key thing is for it to be taught in context with relevant numerical examples, not as a maths subject. The good old RSM did exactly this when I was there (final year 1969/70) and I credit that with my early interest in the subject.
    2. We need to ensure that those conducting peer review know what questions to ask so that over time prospective authors get to know that they won’t be published unless they have followed the correct approach (this is liberating by the way, not limiting!).
    3. Where serious economic decisions are being made, e.g. in doing large-scale plant trials, we should have experimental protocols in place, agreed by all parties (e.g. mining companies, reagent suppliers and equipment vendors), to ensure that the experiments are properly done and analysed (a sketch of the kind of paired analysis such a protocol might specify follows below). It is a similar idea to the JORC code, which the geologists devised when the investment community got tired of charlatans trying to pass off dud deposits as worthy of investors’ hard-earned cash. JORC is now used around the world, and investors will not invest unless the reserves are characterised according to JORC. To me this is exactly analogous to saying that we won’t accept the result of a plant trial unless it has been conducted and analysed according to agreed protocols. It is not hard to do this.
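    [As a hypothetical illustration of point 3, the sketch referred to above: an agreed plant-trial protocol might, for example, call for alternating 'on/off' periods and a paired analysis of the differences. The Python below shows that kind of calculation with invented numbers; it is not a prescription for what any particular protocol should contain.]

```python
# Hypothetical example of the analysis an agreed plant-trial protocol might specify:
# alternate periods with the current reagent (off) and the trial reagent (on),
# pair adjacent periods to cancel slow drifts in feed, and test the paired differences.
import numpy as np
from scipy import stats

# Recovery (%) for six paired on/off periods (invented numbers for illustration)
off = np.array([86.1, 85.4, 86.8, 85.9, 86.3, 85.7])
on  = np.array([86.9, 86.0, 87.5, 86.4, 87.1, 86.2])

diffs = on - off
t_stat, p_value = stats.ttest_rel(on, off)
ci = stats.t.ppf(0.975, df=diffs.size - 1) * diffs.std(ddof=1) / np.sqrt(diffs.size)

print(f"mean uplift = {diffs.mean():.2f} +/- {ci:.2f} % (95% CI), p = {p_value:.3f}")
```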

    Comments would be very welcome.

    Tim Napier-Munn, JKMRC, Australia

    Replies
    1. It has surprised me in recent years to see how many engineering degrees do not require statistics classes as part of the overall curriculum:
      1. the class is available as an option, and many students appear to opt away from maths?!
      2. there is a mindset of embedding stats into other classes, but the philosophy/thinking is missed with a pure focus on use of the tools.

  9. I am with you on this one. I might have become notorious at our (Process Engineering, Stellenbosch) internal post-graduate presentation sessions for challenging presenters on their experimental design, repeatability, statistical significance and confidence intervals of results where their own experimental data are involved.

    It should be said that Process Engineering at Stellenbosch University does include a thorough course in experimental design as part of our undergrad curriculum (Second Year, Second Semester: Chemical Engineering D244 – Experimental design). Our engineering programme is audited and approved by the Engineering Council of South Africa (ECSA) on a regular basis.

    Yet, I must agree that the onus remains upon the student/researcher to apply proper experimental design, and upon the supervisor/examiners/reviewers to check for proper experimental design in all submitted works, as applicable.

  10. My first introduction to experimental design and statistical analysis was in 1962 through Topping's textbook entitled Errors of Observation and their Treatment. This was when I first understood the difference between systematic and random errors. This little gem was first published back in 1955, but is still available today as an e-book (fourth edition published in 1972). This book was published in the days before personal computers and it was a major challenge to apply some of the advanced classical parametric tests that were available at the time. My personal breakthrough in the application of statistical techniques came in the 1990s with the resurgence of resampling methods using Monte Carlo simulation techniques, coupled with experimental design tools that allow for interaction effects and the visualisation of response surfaces. Certainly, Professor Tim Napier-Munn's new textbook, which covers resampling techniques, is a blessing for our young minerals processing engineers and scientists (and not so young) wanting to achieve high levels of competence with regard to experimental design and the statistical treatment of errors.

    Another related hobby-horse of mine is that of sampling, especially with regard to obtaining steady-state estimates of mass balances around pilot or production scale circuits. Proper sampling procedures have been spelt out again and again in the literature: to get a representative sample you must totally cut the stream to be sampled using a cutter with a prescribed geometry and cutter speed. It is also known that if you obey these simple rules you can get an unbiased estimate of the mass flow of the stream being sampled. This allows you to check the calibration of mass flow instrumentation (such as magnetic flow meters and radiometric density gauges) that is prone to systematic errors. In spite of proper sampling procedures being widely published (championed by Pierre Gy over many decades until his death in November last year), I've yet to find an operating plant where the basic rules of sampling are obeyed fully. Unfortunately, plant operators and researchers persist in taking samples manually with cutters that have an imprecise geometry and with no concern for cutter velocity. Moreover, most production plants are designed such that the key streams required to be sampled, to obtain a complete circuit mass balance, are impossible or dangerous to access properly.

    Adrian Hinde, AH Consulting, South Africa
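    [Picking up Adrian's points about Monte Carlo resampling and circuit mass balances, a minimal hypothetical Python sketch of how random assay errors propagate into a two-product mass balance. Note that this captures random error only; a biased sample cutter introduces a systematic error that no amount of resampling can remove.]

```python
# Hypothetical sketch: propagate random assay errors through a two-product mass balance
# by Monte Carlo simulation. Yield to concentrate Y = (f - t)/(c - t), recovery = Y*c/f.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

f_assay, c_assay, t_assay = 2.5, 25.0, 0.30   # feed, concentrate, tailings grades (% metal)
rel_sd = 0.03                                 # assumed 3% relative standard deviation on each assay

f = rng.normal(f_assay, rel_sd * f_assay, n)
c = rng.normal(c_assay, rel_sd * c_assay, n)
t = rng.normal(t_assay, rel_sd * t_assay, n)

yield_c = (f - t) / (c - t)                   # mass pull to concentrate (fraction)
recovery = 100.0 * yield_c * c / f            # metal recovery (%)

print(f"yield    = {yield_c.mean()*100:.2f} % +/- {yield_c.std()*100:.2f} % (1 sd)")
print(f"recovery = {recovery.mean():.1f} % +/- {recovery.std():.1f} % (1 sd)")
```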

  11. Barry, for many years I have reviewed manuscripts containing bad science, poor (often deliberate ignorance of) literature knowledge, and other behaviours of an unprofessional nature. The last includes plagiarism, misrepresentation of the work of others, repeating previous work and claiming "new" or "first study of", etc. Conferences these days are rife with wheel reinvention. The sad thing is that many of these manuscripts end up published. The authors rely on overloaded or lazy reviewers and editors, or aim at journals on the borderline of the field where knowledgeable reviewers/editors are scarce.
    The result of this is stagnation of the science and of the ensuing technological exploitation, the latter being economically vital in minerals processing/engineering.
    It rests on us all to spend the time on reviewing properly and, as editors, to ensure reviewer concerns are addressed.

    Replies
    1. I agree, Bill - the issue is not just about the statistical component of the experimental design; there are also elements related to basic understanding of the field. For example, in minerals studies we now have access to well-established mineral characterisation techniques to identify and quantify mineral phases, enabling experimental results to be given 'mechanistic' validation alongside whatever statistical evidence is available. The underlying science, i.e. hypothesis testing of why certain behaviours are observed, can then be addressed. In many cases this now requires more than a simplistic knowledge of a single mineral or mineral type (e.g. sulfides); the gangue must be considered as well. As you point out, this requires effort that may be regarded as too time-consuming.
      Glad to see this being discussed.
      Angus McFarlane.

  12. Thanks to all of you for your comments which confirm that there is a lot of bad science about, and we should start to do something about it. Interestingly The Times reports another 'scientific study' today, this one on the effects of exercise. A study at the University of Cambridge has concluded that if you are bone idle and start exercising for 20 minutes a day, that will add about an hour to your life each time you do it. However, the benefits in terms of life expectancy then drop off dramatically, as the next 20 minutes of exercise you do each day only buys you an extra 15 minutes of life. Hmmm!

  13. Besides plagiarism, badly designed experiments need to be closely watched and acted upon to achieve scientifically meritorious papers. I fully agree with the sentiments expressed in this regard.
    Reviewers need to be watchful and need to be serious in weeding out such poorly presented papers.
    On my part I will be vigilant in this regard.
    Regards,
    Prof. K. A. Natarajan, Department of Materials Engineering, Indian Institute of Science

    Replies
    1. You always have been, which is why you are a trusted and valuable reviewer for Minerals Engineering. Many thanks


If you have difficulty posting a comment, please email the comment to bwills@min-eng.com and I will submit on your behalf