Dogmatism and the Scientific Process: A Need for Change

Despite science being a beacon of innovation, invention, and new ideas, the process of scientific publication has remained relatively unchanged for the past 250 years (Spier, 2002).  Since 1752, the peer-review method of science publication has involved scientists submitting their work to be reviewed by a small sample of their peers. Technology has since expedited this process, but it has left the underlying structure largely intact.

The fact that the peer-review process has remained so rigid and dogmatic would be reasonable if it were a generally flawless system. However, anyone who has engaged in this process is fully aware that it has many drawbacks, inefficiencies, and problems.  Many journal articles and editorials have drawn attention to these problems, and they do not need to be restated in depth here.  Suffice it to say, a mix of poorly aligned incentives, individual biases, and an incomplete process of peer review has led to a file-drawer problem, p-hacking, replication difficulties, privatized publishing of publicly funded data, and a generally inefficient system of scientific discovery.

Efforts have been made to address these problems, and some good solutions exist to mitigate them (e.g., the Open Science Framework, Sci-Hub). While recommendations like those proposed by the OSF help to diminish these concerns, they ultimately fail to overhaul a problematic system that has barely changed in over two centuries.

A Brave New System

Recently, a number of researchers posted their scientific discoveries directly to the internet prior to submitting them for publication (Harmon, 2016).  These scientists argued that the process of publication is too slow and prevents rapid dissemination of information.  Indeed, many journals noted that they would not penalize scientists for releasing findings related to the Zika virus.  But as some have pointed out: why stop at the Zika virus?

The ability to pre-publish one’s results is a small window into what is possible in the information age. However, my concerns are not simply about the speed of dissemination.  Rather, I argue that the ability to publish results to the internet quickly and at relatively little cost provides the opportunity for a complete overhaul of the publication process.

As scientists know, publications become academic currency.  Publishing a paper, particularly in a top-tier journal, provides an avenue to employment, funding, and promotion.  Consequently, the incentive to publish becomes increasingly powerful, and maxims like ‘publish or perish’ become norms.  Scientists working on projects that don’t produce confirmatory results are forced either to accept that the time and money they put into a project were wasted or to ‘massage’ the data until an ‘interesting’ result emerges. Indeed, this incentive structure produces a positive-results bias, in which only statistically significant effects are published, a large percentage of which subsequently fail to replicate (Open Science Collaboration, 2015; Begley & Ioannidis, 2015).  Researchers who refuse to conform to these standards are eventually weeded out of this highly competitive market.
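
The arithmetic behind this bias is worth making explicit. The following is a minimal simulation sketch in Python; the parameters are illustrative assumptions, not empirical estimates (10% of tested hypotheses are true, 20 subjects per group, a medium effect size). It shows that when only significant results see print, false findings can outnumber true ones in the published record.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def published_false_share(n_studies=10_000, prior_true=0.10,
                          n_per_group=20, effect_size=0.5, alpha=0.05):
    """Simulate a literature in which only significant results are published
    and return the share of published findings that are actually false."""
    published_true = published_false = 0
    for _ in range(n_studies):
        is_true = rng.random() < prior_true   # does a real effect exist?
        shift = effect_size if is_true else 0.0
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(shift, 1.0, n_per_group)
        _, p = stats.ttest_ind(a, b)
        if p < alpha:                         # the file drawer in action:
            if is_true:                       # non-significant studies
                published_true += 1           # simply vanish
            else:
                published_false += 1
    return published_false / (published_true + published_false)

print(f"False findings in the published record: {published_false_share():.0%}")
```

Under these assumptions, over half of the published effects are false: true effects are rare and the studies underpowered, while false positives get published at the full 5% rate.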

While top journals have served and continue to serve as a proxy for high-quality science, they are also biased toward findings that seem interesting and counterintuitive, but not too counterintuitive.  Despite what one might think, the data are rarely the focus. The main criterion becomes how well authors can motivate their contribution and whether reviewers and editors deem the findings worthy of publication. Further, it is difficult to know in advance which work will eventually push new paradigms of scientific inquiry, and many papers fail to gain recognition until several years after publication (Ke, Ferrara, Radicchi, & Flammini, 2015).  Consequently, positive results often bounce from journal to journal until they are eventually accepted (or abandoned), while replications and non-significant results are relegated to the file drawer.

Revising and Resubmitting the Scientific Process

Anyone who has ever taken a high school science class knows that the scientific method is about deriving and testing a hypothesis and then communicating the result, regardless of whether it confirms, falsifies, or fails to confirm that hypothesis. But this is not how the real scientific process unfolds; only positive results are communicated.  The problem is compounded by post-hoc reasoning about unexpected results.  A prominent social psychologist has even explicitly advocated this approach (Bem, 1987), noting: “There are two possible articles you can write: (a) the article you planned to write when you designed your study or (b) the article that makes the most sense now that you have seen the results. They are rarely the same, and the correct answer is (b).”

In order to address the problems associated with the current system, I propose designing a single database in which every experiment is published.  In this new system, scientists are rewarded for each experiment they run, regardless of the result.  Because science builds on new discoveries, novel insights will still be desirable, but replications that confirm, and more importantly fail to confirm, those results will be given appropriate attention as well.
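
To make the proposal concrete, a record in such a database might look something like the sketch below. Every name here is a hypothetical illustration of the kind of information the system could capture, not an existing schema.

```python
from __future__ import annotations

from dataclasses import dataclass, field
from enum import Enum

class Outcome(Enum):
    CONFIRMED = "confirmed"          # significant, in the hypothesized direction
    NOT_CONFIRMED = "not_confirmed"  # non-significant result
    FALSIFIED = "falsified"          # significant, opposite the hypothesized direction

@dataclass
class ExperimentRecord:
    """One entry in the proposed publish-everything database."""
    authors: list[str]
    hypothesis: str                  # stated before data collection
    methods: str                     # full procedure, including failed setup attempts
    outcome: Outcome                 # recorded whatever the result, closing the file drawer
    dataset_url: str                 # raw data, outliers and missing values included
    analysis_code_url: str           # everything needed to reproduce the analysis
    replicates_id: str | None = None                  # record this study replicates, if any
    reviews: list[str] = field(default_factory=list)  # open post-publication reviews
```

Crucially, a record earns its place by existing, not by its outcome; confirmations, failures to confirm, and falsifications are all first-class entries.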

The system of peer review can remain, but its goal is now different.  The goal is not to gatekeep what gets published and what does not, but rather to safeguard that the data are consistent with the methods being used and the conclusions being drawn. Review should focus, almost solely, on whether the methods and analyses were appropriate and whether the data support the arguments being made. If a researcher took a few attempts to set up the experiment properly, this too should be documented.  Outliers, missing data, and caveats should be openly reported, as should entire datasets.  Because all data will be published, there is no incentive to obfuscate the results, p-hack, or engage in any other questionable research practices.
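
With data and code attached to every record, much of this safeguarding could even be partially automated. The toy sketch below (the function, dataset, and reported statistics are all invented for illustration) shows the kind of consistency check a reviewer could run to confirm that the statistics reported in a manuscript match the posted raw data:

```python
import numpy as np
from scipy import stats

def matches_report(group_a, group_b, reported_t, reported_p, tol=0.01):
    """Recompute a two-sample t-test from the posted raw data and check
    that it agrees with the statistics reported in the manuscript."""
    t, p = stats.ttest_ind(group_a, group_b)
    return abs(t - reported_t) <= tol and abs(p - reported_p) <= tol

# A reviewer pulls the open dataset and verifies the authors' reported test.
a = np.array([5.1, 4.8, 5.6, 5.0, 4.7])
b = np.array([5.9, 6.1, 5.7, 6.3, 6.0])
print(matches_report(a, b, reported_t=-5.16, reported_p=0.0009))  # True
```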

A secondary goal is to make the paper better; if something is unclear, it should be noted.  But instead of using clarity as a reason for rejection, reviewers can work with researchers in a friendly back-and-forth to ensure that important points are made clearer.  Corrections can be streamlined, as researchers and reviewers will no longer need to detail issues in extensive letters to editors.

Little focus should be placed on the introduction and discussion.  That is not to say that introductions and discussions are not needed.  Communicating hypotheses clearly and positioning them within the relevant literature can help others scaffold the current findings within the larger body of scientific knowledge. But how interesting the findings are should not be a criterion for publication. Focusing on interestingness not only biases the scientific process; it is also time-consuming for authors, who spend a large amount of effort setting up the introduction so that the work seems as interesting as possible. No research should be relegated to a file drawer because other researchers did not deem it captivating enough. Eliminating interestingness as a criterion and focusing on the data will only increase scientific efficiency.

Because researchers are forced to frame their research as interesting, novel, and statistically significant, most of the experiments researchers perform never get published. Consequently, untrue ideas can be perpetuated for many years, wasting the time of other researchers who fail to replicate them.  Recently, a few extensive attempts to replicate the ego-depletion effect, a foundational idea in recent social psychology research, failed to find a statistically significant effect (Hagger et al., 2016; Lurquin et al., 2016).  Given that the original paper has been cited over 3,000 times, if ego depletion is false, it is distressing to think about the amount of time and resources wasted on the idea. If the incentive structure in academia were changed to reward all attempts, however, untrue ideas would quickly be dismissed and researchers could focus on new or more robust ones.

Changing the Status Quo

Twenty years ago, Forbes predicted that publishing would be the first casualty of the internet age.  Instead, academic publishing has grown into an industry generating over $25 billion in 2015, led by Elsevier, the largest firm, which profits by privatizing and restricting access to publicly funded research (Schmitt, 2016).  Along with improving replicability and efficiency, the open system proposed here has the added benefit of making research fully accessible to the general public.

Ultimately, the scientific process has been valuable, but there is definite room for improvement.  As technology enables better alternatives, researchers owe it to themselves and the world to embrace them.  How to implement this change is still an open question, and this article serves as a first attempt to begin thinking about the problems inherent in academia’s incentive structure and peer-review process and to start working toward a solution. Implementing any new system will undoubtedly come with a unique set of problems, but as scientists, we should be open to these challenges and motivated to solve them.


References

Begley, C. G., & Ioannidis, J. P. (2015). Reproducibility in science: Improving the standard for basic and preclinical research. Circulation Research, 116(1), 116–126.

Bem, D. J. (1987). Writing the empirical journal article. In M. P. Zanna & J. M. Darley (Eds.), The compleat academic: A practical guide for the beginning social scientist (pp. 171–201). New York: Random House.

Hagger, M. S., Chatzisarantis, N. L. D., Alberts, H., Anggono, C. O., Batailler, C., Birt, A., … Zwienenberg, M. (2016). A multilab preregistered replication of the ego-depletion effect. Perspectives on Psychological Science, 11(4), 546–573.

Harmon, A. (2016, March 15). Handful of biologists went rogue and published directly to internet. The New York Times. http://www.nytimes.com/2016/03/16/science/asap-bio-biologists-published-to-the-internet.html?_r=0

Ke, Q., Ferrara, E., Radicchi, F., & Flammini, A. (2015). Defining and identifying Sleeping Beauties in science. Proceedings of the National Academy of Sciences, 112(24), 7426–7431.

Lurquin, J. H., Michaelson, L. E., Barker, J. E., Gustavson, D. E., von Bastian, C. C., Carruth, N. P., & Miyake, A. (2016). No evidence of the ego-depletion effect across task characteristics and individual differences: A pre-registered study. PLoS ONE, 11(2), e0147770.

Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716.

Schmitt, J. (2016, January 4). Can’t disrupt this: Elsevier and the 25.2 billion dollar a year academic publishing business. https://sasconfidential.com/2016/01/04/elsevier-25-billion/

Spier, R. (2002). The history of the peer-review process. Trends in Biotechnology, 20(8), 357–358.

License

This article and its reviews are distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and redistribution in any medium, provided that the original author and source are credited.