Sunday 4 August 2024

Forty-four mineral processors in top 0.5% of all scholars worldwide. But how seriously should we take these ranking lists?

This is an update to the posting of 9th May, in which four mineral processors were listed in the top 0.05% of all scholars worldwide. These so-called Highly Ranked scientists are the most productive in terms of number of publications, their work is of profound impact (citations) and of utmost quality (h-index), and they hold ScholarGPS ranks of 0.05% or better.

ScholarGPS is a California-based company that applies artificial intelligence, data mining, machine learning and other data science techniques to its massive database of over 200 million publications and 3 billion citations to rank over 30 million scholars worldwide. In computing the rankings, each publication and citation is weighted by the number of authors, and self-citations are excluded. The automated approach is claimed to remove the bias often associated with human assessments.

ScholarGPS has now published a wider list of Top Scholars by Expertise, the top 0.5% of ranked scholars, and 44 mineral processors are in this list, although not all of the affiliations are accurate!

Congratulations to all concerned, but how seriously should we take these rankings? An article in The Times 3 days ago highlighted the online profile of Larry Richardson, a young mathematician with significant potential. According to Google Scholar, a website widely used to evaluate academics for jobs and promotions, he had produced a dozen research papers in the past four years, which had been cited by his peers scores of times. His career appeared to be blossoming. 

There was just one issue: Larry was a cat! In fact, he was the most highly cited feline in the world of academia, thanks to an experiment designed to expose long-standing flaws in how researchers are ranked. The fake profile revealed weaknesses in the Google Scholar website that allow scientists to fraudulently boost their credibility.

The story began when Reese Richardson, a PhD candidate at Northwestern University in the US, and Nick Wise, a research associate at the University of Cambridge, spotted services being advertised on Facebook that promised to boost the careers of scientists by fraudulently inflating their standing on Google Scholar. They worked out that these fraudsters were using easily available software to generate sham research papers, which were then uploaded to ResearchGate, a well-known social media platform for academics.

These papers consisted almost entirely of nonsense, but they also included citations of studies purportedly produced by the fraudsters’ customers. When Richardson and Wise realised that Google Scholar was recognising these bogus papers and citations as legitimate, they marvelled at how easy it seemed to be to game the system. It might even be possible, speculated Wise, to transform a family pet into a rising academic star.

So, what do you think of ranking lists? Should we take them seriously?

4 comments:

  1. Interesting question Barry. One might say that wherever humans are involved, the potential for fraud exists. For example, I am on the receiving end of spam phone calls virtually every day. However, as Abraham Lincoln stated: “You can fool some of the people all of the time, and all of the people some of the time, but you can not fool all of the people all of the time.” While research measurement apparently can be gamed, I doubt that those at the top of the heap by such means can escape detection for long: the people involved will be known, and the quality of their work when scrutinized should speak for itself.

    ReplyDelete
    Replies
    1. Thanks Franklin. While I can see how Larry the Cat made it via Google Scholar, I doubt that the ScholarGPS list could be as easily manipulated, as it is basically a measure of lifetime achievement, being the AI evaluation of hundreds of publications and citations for each individual. Many of these publications and citations go well back into the last century, which explains why there are no young people on the list. Note also that all 44 mineral processors in the list are male, reflecting how few women were in mining only a few decades ago.

      Delete
  2. I played around with ScholarGPS this afternoon, Barry, using different filters on test individuals in the health sciences. Each of them showed up as several different people, with distinct academic institutional affiliations and differing scores, and with no recognition for a larger body of work accomplished in institutions not considered primarily academic (and not listed by ScholarGPS), even though such institutions actually contribute a lot to global health science scholarship. My sense is that this algorithm-based ranking system is likely more accurate for academically based people who have not moved around much, and whose research focus is tightly defined.

     I then did a quick search for comparisons between Google Scholar and ResearchGate. Paraphrasing from the source below: both use an automatic crawling algorithm that extracts bibliographic data, citations and other information about scholarly articles from various sources. However, the two platforms often show different publication and citation data for the same institutions, journals and authors. The results indicate large differences in publication counts and citations for the same authors, with Google Scholar having higher counts in the vast majority of cases. "The coverage policy, indexing errors, author attribution mechanism and strategy to deal with predatory publishing are found to be the main probable reasons for the differences in the two platforms".

     I personally think that overall, all these systems (and there are others, e.g. Academia.edu) are works in progress and subject to many sources of error, including fraud and abuse. As noted earlier, within a given field the people involved will be known, and the quality of their work when scrutinized should speak for itself.

     Reference: Singh, V.K. et al., ResearchGate and Google Scholar: How much do they differ in publications, citations and different metrics and why?, ResearchGate, May 2021. www.researchgate.net/publication/351990976

    ReplyDelete
    Replies
    1. Interesting analysis, Franklin. It would be good to know what others feel about these ranking systems.

      Delete

If you have difficulty posting a comment, please email the comment to bwills@min-eng.com and I will submit on your behalf