Reproducibility: does it really matter?

  P. O. Box 7, Miki-cho post office, Ikenobe 3011-2, Kagawa-ken, 761-0799, Japan

If science were perfect, there would be nothing left to explore or critique. So, irreproducible science serves two functions: to scientists, it represents an opportunity to improve or discover new aspects that were not apparent in the initial phase of discovery, while to science critics it represents the failures and flaws of scientists and the peer review and publishing systems.


Sometimes it makes sense to define reproducibility by defining what is not reproducible, although definitions vary widely (Goodman et al. 2016). Because most aspects of research, or of a published paper, have an element of reproducibility, the term is used in its broadest sense in this essay. Something that fails is easier to distinguish, quantify, assess and judge than an object of perceived high quality. For example, a paper published in a highly indexed journal is considered by many in the scientific community to represent the upper echelon of scientific quality. In relative terms, one could say that all other research, perhaps in the eyes of those who have published in such journals, is of low quality. Thus, at a basic level, one needs to distinguish a paper that has quality from one that is reproducible. Does a scientist reproduce research that has no quality or intrinsic value, or does one consider that every published paper carries elements of value that are simply waiting to be discovered, used (i.e., cited), and thus reproduced? The lack of reproducibility is therefore associated with both high-quality and low-quality work. Unless the factors that determine the soundness and integrity of research and publishing are resolved, the entire argument over what constitutes reproducible versus irreproducible science may be flawed.


One could argue that a popular topic will garner greater attention and use than an unpopular one. At a crass level, one could easily envision that research seeking a cure for a disease, and thus directly benefiting society and humanity, would find wider resonance than a study investigating the patterning on a moth’s wing. Although this example is somewhat extreme, most would say, hands down, that the former is more important, and thus more popular. Popular topics attract greater interest and thus more scientists, which in turn attracts greater funding. Greater funding incentivizes more ambitious objectives and invites the use of more complex tools and techniques that are more prone to error. Greater funding also skews research opportunities, the publishing playing field, and institutional investment. Consequently, bias towards a topic that is useful, or popular, while ignoring the potential of research with no immediate and visible usefulness, also lends itself to publishing bias, since editors and journals are more receptive to, and thus more likely to publish, popular topics than more ephemeral ones. Hyperbolic claims made by sensationalist and grandiose research, which tends to involve large multi-author groups and to claim pole position in higher-ranking journals, also involve a large human and financial investment. Such studies automatically attract greater readership, and hence greater scrutiny, so elements of irreproducibility are easier to screen in such papers. The system of rewards in place, at the level of institutions, grants and publishing incentives, thus tends to favor popular research. Consequently, unpopular or poorly funded research may only find a venue in low-ranking journals, or journals of suspect academic quality, and tends to be ignored simply because it is not seen, or scrutinized, as much as popular research; but it should not be.
The rise and expansion of open access (OA), the gradual acceptance of post-publication peer review (PPPR) as being an integral part of the publishing process, and the application of both tools to screen papers for irreproducible factors may have truly stimulated the reproducibility debate.


The issue of irreproducibility cannot be corrected until a new scaffold is added to the structures that initially contributed to this crisis. Discussion may otherwise be futile, since the factors that drive science publishing are biased, and irreproducibility and failure are built into the equation; the exception is PPPR, which is irreproducibility’s logical counter-balance. The keys to reducing irreproducible science thus lie first in repairing the weaknesses and flaws that are born within the laboratory and that exist within the current publishing model, while holding those who produced them – scientists, editors and publishers – fully accountable. Lack of responsibility fuels irreproducibility. Here, institutionalized PPPR plays a central role. Whether irreproducible science represents the natural evolution of a constantly evolving state of scientific inquiry, the product of an honest error, or a deliberate (in the case of fraud) act of misleading the readership for personal or professional gain, it can be characterized as being either: 1) erroneous but useful, 2) fraudulent but popular, 3) erroneous and useless, or 4) error-free and useful.


How did, and does, irreproducible science get published in the first place? The first weak link is a peer review system, impervious to reform, that relies on the human factor to fact-check manuscripts before they are published. Human fallibility is at the core of irreproducible science. We can thus expect irreproducible science to persist long into the future, provided that the same peer review structure and associated system of rewards remain intact. Peer review is fallible at multiple levels (Teixeira da Silva and Dobránszki 2015), even in elite journals where it is perceived as being near perfect. Is the number of manuscripts published by legitimate academic journals, which screen out most irreproducible factors, comparable to that published by unscholarly journals, which screen out none? With the OA movement, the amount of unscholarly and poorly vetted science may begin to exceed the amount of traditionally vetted science. The end-user thus controls how much irreproducible science is promulgated into the literature.


Until PPPR crept into the science publishing process, the traditionally oligopolistic publishers that claimed rights to most published science were relatively impervious to the lack of reproducibility. PPPR is a balancing force, complementing biased, flawed, imperfect and permeable traditional peer review. This, together with the increased choice of OA venues and the freedom to self-publish and self-archive, is making these same publishing powers pay greater attention to the issue of “reproducibility” (sensu lato), simply because not paying attention would affect the success of their business models and threaten their very power in science publishing. So, even though irreproducibility may not be easily eliminated, since it is built into current research and publishing structures and incentives, it represents a threat to the publishing economic model by threatening pride and repute. Unintentionally, only now has reproducibility started to matter.


It is as impossible to eliminate irreproducibility as it is to create excellent and perfect research. Thus, robust ways to reduce irreproducibility are needed. Some possibilities include:

1.  Increasing the number of competent peer reviewers, and integrating a model that includes pre-submission peer review and PPPR. This model should be open, so as to make authors, institutions, peers, editors and publishers fully accountable for what they approve for publication.

2.  Providing financial incentives (e.g., royalties) for authors and editors. Insufficient or artificial motivation (e.g., impact factors) can spur or increase cheating, and thus irreproducibility.

3.  Effectively implementing penalties for misconduct and fraud, applied retroactively. This would reduce the incentive for non-academic or fraudulent behavior. PPPR can root out irreproducible and flawed science.

4.  Making all published science OA. Financial constraints would spur greater scrutiny and selectivity of research; science that is more selectively scrutinized prior to publication would carry lower levels of irreproducibility.

5.  Clarifying the boundaries between bad, flawed, and irreproducible science, and negative results. Clearer definitions of these boundaries will assist in separating the wheat from the chaff.


Science is flawed, and the concept of “perfect science” is a fallacy. But what constitutes “good enough” or unflawed research? What is the threshold level of errors or flaws that would make a paper’s findings reproducible and allow it to remain in the literature, versus having it retracted? The lack of reproducibility is built into science and scientific discovery, and into the commercial model that exploits science’s imperfections to create newly imperfect science. Thus, as I see it, the issue of reproducibility will not be easily resolved in the near future.




Goodman, S.N., Fanelli, D., & Ioannidis, J.P.A. (2016) What does research reproducibility mean? Science Translational Medicine 8(341): 341ps12. doi: 10.1126/scitranslmed.aaf5027

Teixeira da Silva, J.A., & Dobránszki, J. (2015) Problems with traditional science publishing and finding a wider niche for post-publication peer review. Accountability in Research: Policies and Quality Assurance 22(1): 22-40. doi: 10.1080/08989621.2014.899909


Reviews

  • Konrad Hinsen

    This analysis suffers from not distinguishing the many possible causes of irreproducibility clearly enough. Irreproducibility is a symptom that something is wrong. That something can be as bad as fraud or sloppy work, but it can also be a lack of sufficient understanding of the phenomena being studied. In the latter case, irreproducibility can be the starting point for further research that leads to a better understanding. One main problem in the current reproducibility crisis is the other end of the spectrum, in particular sloppy work done to get another paper out as soon as possible. Another main problem is insufficient documentation of much published research, which makes it impossible to learn something from an observed irreproducibility.


This article and its reviews are distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and redistribution in any medium, provided that the original author and source are credited.