Leveraging Doctoral Requirements to Promote Reproducibility

  1.  École polytechnique fédérale de Lausanne
  2.  Swiss National Science Foundation

It is commonly accepted that there is a crisis of reproducibility in science [1].  But how can this trend be reversed when bibliometrics such as the h-index tend to encourage quantity and incremental research [2], often at the expense of quality?

In an ideal world, researchers would only publish when they have found a meaningful result, be it a new insight or a better way to solve a problem.  Authors would share all relevant information to make their work as reliable and reproducible as possible, promoting transparency and open data.  They would thereby produce fewer papers, and those papers would represent higher quality science.  These studies must be externally confirmed as part of the scientific process, and so in this ideal world, every researcher would publish a reproducibility study of another researcher’s work between every original paper of their own.  Unfortunately, the current system rewards the researchers with the most, and the most cited, original publications, and so until the system changes, it is unreasonable to expect researchers to change on their own.

One commonly discussed solution involves changing the publication process, for example by encouraging the publication and discussion of preprints, or by introducing new concepts such as invited reproducibility papers [3,4].  These alternative publication concepts are an integral part of the solution, but they must be complemented by a drive to actually reproduce the research of others.  There is little motivation for profitable publishing companies to change their ways, and most researchers will continue to submit traditional articles to the journals considered the best in their field, with little regard for open access and the other novel ideas being pushed to change the publishing model.  Instead, it is necessary to insert an external motivator into the system.  But how can such a check be funded and enforced?  One solution is to make it a part of the requirements to complete a PhD.

During the PhD, students should be formally required to rigorously reproduce an already-published piece of work, and these reproductions should be documented and published in venues created for this purpose.  Reproduction exercises are already regularly undertaken at the beginning of new research projects, in order to gain an understanding of topics relevant to the new research.  A similar model already exists in many Master’s theses, where an existing piece of science is applied in a new direction.  Depending on the work being reproduced, the requirement might involve several smaller reproductions rather than one large one.

The beauty of requiring formal reproducibility studies as part of the PhD is that it does not require the whole system to shift at once, an outcome that is unlikely at best.  It can be started at individual universities, or even individual departments, and grow from there.  Informal reproduction is already being done by most researchers at the start of their own research.  We are simply lacking the administrator-enforced requirement that these reproduction attempts be thoroughly documented and published.

Publishing Reproducibility Studies

Reproduction papers could be published in the journals where the original studies appeared, in a special “reproducibility” section.  However, journals may not want to publish articles that expose failures in work that originally appeared in their pages, and traditional journals tend to only publish studies with positive results [5].  A better reproducibility publishing model would be more independent, open access, and equipped with a structure for open discussion with the original authors; any unbiased open platform that supports public review and discussion, such as arXiv or the Winnower, should be sufficient.

As an attainable first step towards standardized scientific reproduction, the difficulties and successes encountered by PhD students in their attempts to reproduce scientific studies should be documented and posted on such a platform.

Special journals designed specifically for publishing reproducibility studies are likely to appear if there is a growing market, and if such journals were based on the arXiv overlay model [3], they would require little maintenance.  Many models of self-sustaining journals have been proposed [6], in which the author pays a moderate fee to submit and reviewers work for free, as is already standard.

These reproducibility studies will likely be shorter than the original papers on average, as they will focus on discussing the difficulties and successes of reproduction, and the original scientific content need not be fully re-explained.  The time required to review a reproducibility study should often be less than the time required to review an original research paper.

Positive Effects on Original Research Papers

There are many benefits that will come with large-scale reproduction of scientific studies.  Once formal reproductions become more standard, irreproducible studies will be less acceptable, and researchers will write papers to be more reproducible from the beginning.  All dependent variables, parameters, and choices of metrics will have to be reported and explained, with details provided in supplementary material as necessary.  Researchers will be compelled to share much more of their data, and researchers who use computational methods will be compelled to share their code.

Having several papers successfully reproduced will be a mark of respect.  Few people will want to reproduce inconsequential science, so having one’s research reproduced will be seen as a clear sign of its importance.  This would likely lead to the creation of a “reproducibility index” that describes how many of one’s studies have been successfully reproduced.  If bibliometrics must continue to be used, a high r-index will surely be seen as valuable.
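To make the proposed metric concrete, here is a minimal sketch of how such an r-index could be computed, assuming the simplest possible definition suggested above: the number of a researcher’s studies that have at least one successful published reproduction.  The data structure and field names are purely illustrative assumptions, not an existing standard.

    # Hypothetical sketch: count how many of a researcher's studies have at least
    # one successful published reproduction.  All names here are illustrative only.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Study:
        title: str
        successful_reproductions: int = 0  # published reproductions confirming the result

    def r_index(studies: List[Study]) -> int:
        """Number of studies with at least one successful reproduction."""
        return sum(1 for s in studies if s.successful_reproductions > 0)

    portfolio = [
        Study("Original result A", successful_reproductions=2),
        Study("Original result B"),
        Study("Original result C", successful_reproductions=1),
    ]
    print(r_index(portfolio))  # -> 2

More elaborate definitions (for example, weighting by the number or independence of reproductions) would follow the same counting pattern.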

Currently, there is little reward for creating reproducible science, even though this is a fundamental part of the definition of science.  When reproducibility studies start to become more standard, journals will have an incentive to publish original work that will prove to be more reliable later on.

Funding

Most reproducibility work can be performed under standard research projects, as an understanding of related work is necessary for performing a novel study.  Since these scientific reproductions will result in scientific publications, this should be straightforward to justify.  This is especially true if hiring committees and funding agencies begin to appreciate these works as an important part of a complete scientific portfolio.

Hiring committees and funding agencies have begun claiming that they will place less weight on bibliometrics [7], but until we have equally easy and “reliable” measures of scientific success, these metrics will continue to be referenced.  Expecting research to be formally reproduced should result in fewer but higher quality publications.  Further, it will produce PhD graduates who will work under the expectation that their science will be scrutinized for its reliability in addition to its novelty.

It is not unreasonable to ask funding agencies to support funding schemes specifically aimed at reproducibility studies, both in the form of full projects, and also in the form of travel.  Researchers will sometimes have to travel to use specialized equipment, and it may often be beneficial to spend time with the original authors of a study once an initial attempt at reproduction has failed.

In addition, industry has a vested interest in knowing which science can reliably be used in applications.  It is likely that industry would be willing to fund reproducibility studies to promote knowledge and technology transfer.

Concerns

Only the most interesting studies, and those likely to lead to further research, will be reproduced.  There will be no attempt to reproduce the majority of published scientific papers, but this is an acceptable situation.  Having one’s work reproduced will be a desired scientific achievement, and so good researchers will work to make their published papers as reproducible as possible.  When reproducibility becomes respected and expected, publishing papers with no chance of reproduction will be seen as less valuable, much like publishing papers that receive no citations.

An obvious concern with placing the majority of the reproducibility effort on PhD students is that studies requiring several years or expensive equipment to reproduce will not be addressed.  It is true that these larger studies will not be reproduced by individual PhD students, but as systems are put in place to handle the output of early adopters, reproduction will become more standard, and appropriate and respected publication venues will exist.  It will then become more standard for entire labs to invest time and effort into more extensive reproducibility studies.

This solution by no means addresses all problems related to the lack of reproducibility in scientific publications.  But it would make significant headway, and very importantly, it can be started at the achievable scale of individual departments and universities.  By simply making it a graduation requirement that all PhD candidates formally document and share their attempts at reproduction, it should be possible to grow a system where original research is more reliable from the start.

References:

[1]  M. Baker.  (2016)  “1,500 scientists lift the lid on reproducibility.”  Nature 533, 452-454.  doi:10.1038/533452a

[2]  D. Sarewitz.  (2016)   “The pressure to publish pushes down quality.”  Nature 533, 147.  doi:10.1038/533147a

[3]  E. Gibney.  (2016)  “Open journals that piggyback on arXiv gather momentum.” Nature 530, 117–118.  doi:10.1038/nature.2015.19102

[4]  H. Koers, R. Capone.  (2016)  “New article type verifies experimental reproducibility.”  Elsevier Connect.  https://www.elsevier.com/connect/new-article-type-verifies-experimental-reproducibility

[5]  A. Franco, N. Malhotra, G. Simonovits.  (2014)  “Publication bias in the social sciences: Unlocking the file drawer.” Science 345, 1502-1505.  doi:10.1126/science.1255484

[6]  V. Tracz, R. Lawrence.  (2016)  “Towards an open science publishing platform.”   F1000Research, 5:130.  doi:10.12688/f1000research.7968.1

[7]  “The San Francisco Declaration on Research Assessment.”  (2013)  http://www.ascb.org/dora/

 

Reviews

    Review by Richard Price:

    Dear Anne,


    This is a cool idea. Are you envisaging that doctoral students would publish only successful reproduction attempts? Publishing failed reproduction attempts might be politically problematic for them. It would be cool to find a mechanism by which people do publish failed reproduction attempts, but I think your idea would work even if it was restricted to publishing successful reproduction attempts. 

    Richard

      Reply from Anne Jorstad:

      Thanks Richard.

      I believe it is important to also share unsuccessful reproducibility studies, but I agree that any attempt to do this must be done with care.

      Ideally the original authors would already have been involved in discussions with the researchers attempting to reproduce their work, sharing whatever further information was necessary to complete the study under the same conditions as the original. Even when this is not the case, the original authors must be invited to provide input on the study before it is released publicly. And the final publication of all reproducibility studies should be on a platform that allows for public discussion.

      It is possible that if an unsuccessful study is presented by an early PhD student, the original authors might be less defensive of their work, more willing to accept that the student simply didn't fully understand their original explanation (no need to debate who is at fault), and more willing to provide further explanation of their original work.  And it might not look as bad to a hiring committee.

      If only successful reproductions are published, I fear that out of a desire to be published, the reproducers might be more likely to reproduce any inconsistencies that may have existed in the original study. And this moves away from the original goal of reproduction.

      All that said, I would absolutely support an effort that only publishes successful reproducibility studies (at least at first), if this is the most realistic way to get things moving.

License

This article and its reviews are distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and redistribution in any medium, provided that the original author and source are credited.