In what other area of scholarship are replications of the very same experiment published again and again, received with surprise each time, only to be completely ignored until the next study? Case in point, from an area that ought to be relevant to almost every scientist on the planet: research evaluation. The first graph I know of to show the skewed distribution of citation data (most papers receive few citations, while a small minority receive many) is from 1997:
PO Seglen, the author of the paper above, concludes his analysis with the insight that “the journal cannot in any way be taken as representative of the article”.
In our paper reviewing the evidence on journal rank, we count a total of six subsequent publications (and one prior, from 1992) that present the skewed nature of citation data in one way or another. In other words, it is an established and often-reproduced fact that citation distributions are heavily skewed. Such a distribution of course entails that summarizing it with the arithmetic mean is a mistake that would make an undergraduate student fail their statistics course. Adding to the already long list of replications is Nature Neuroscience, home of the most novel and surprising neuroscience, with this ‘unexpected’ graph:
Only this time, the authors are not surprised: they appropriately cite PO Seglen’s 1997 paper and acknowledge that, of course, this finding is nothing new: “reinforcing the point that a journal’s IF (an arithmetic mean) is almost useless as a predictor of the likely citations to any particular paper in that journal”. Kudos, Nature Neuroscience editors!
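The statistical point is easy to demonstrate for yourself. Here is a minimal sketch with simulated data: a lognormal draw is one common way to mimic the skewed shape of real citation distributions (the parameters below are purely illustrative, not fitted to any journal), and comparing the mean to the median shows how an impact-factor-style average overstates what a typical paper receives.

```python
import random
import statistics

random.seed(42)  # make the simulation reproducible

# Simulated citation counts for 1,000 papers in one hypothetical journal.
# Rounding a lognormal draw down to an integer gives the familiar shape:
# most papers get few citations, a small minority get many.
citations = [int(random.lognormvariate(1.0, 1.2)) for _ in range(1000)]

mean_cites = statistics.mean(citations)      # what an impact-factor-like average reports
median_cites = statistics.median(citations)  # what a "typical" paper actually receives

print(f"mean citations:   {mean_cites:.2f}")
print(f"median citations: {median_cites:.1f}")
# With a long tail of highly cited papers, the mean sits well above the
# median, so the average is driven by a handful of outliers.
```

Running this, the mean lands well above the median, which is exactly why the mean is a poor summary statistic for such data and why the IF says little about any individual paper.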
What puzzles me just as much as it does the authors, and what prompted me to write this post, is their last sentence:
Journal impact factors cannot be used to quantify the importance of individual papers or the credit due to their authors, and one of the minor mysteries of our time is why so many scientifically sophisticated people give so much credence to a procedure that is so obviously flawed.
In which other area of study does it take decades and countless replications before a basic fact is generally accepted? Could it be that we scientists are, perhaps, not as scientifically sophisticated as we’d like to see ourselves? Aren’t we, perhaps, just as dogmatic, lazy, stubborn and willfully ignorant as any random person off the street? What does this persistent resistance to education say about the scientific community at large? Is this not an indictment of the gravest sort of how the scientific community governs itself?
This article and its reviews are distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and redistribution in any medium, provided that the original author and source are credited.