Fire to the File Drawer: Sharing Reproducibility Data in an Online Age

Cardiff Metropolitan University

We’ve come a long way in the past few years when it comes to sharing and reviewing newly published research. The advent of services such as Altmetric (Altmetric.com, 2016a) has opened up a new window of insight into the impact of articles, and, for better and worse, it has allowed us to see discussions from every peer and media outlet that cares to comment on a work. Yet in an age where data on anything and everything, from what’s wasting your phone’s battery to the location and magnitude of every earthquake this week, is easily accessible in one quick search, it’s damning that in scientific circles finding quality replication attempts can be an arduous task. Outside of the saints who publish meta-analyses on that oh-so-specific topic you’re looking into, getting a good idea of the replicability of a study can be extremely time consuming.

We can do better than this. You’d be forgiven for reading that sentiment as a platitude, especially after the barrage of scathing thought pieces about the replicability crisis in recent years (Yong, 2016), many including that exact phrase. It is instead a statement of fact. In official forums, such as academic journals, the scientific community has become too insular with regard to how it handles publication and the wider evaluation of material, seeing the current model as the only way forward. Outside of this model, however, there are plenty of tools that could be adopted to refine existing review and publication infrastructure and vastly improve the current reproducibility situation.

Although a traditional journal-style publication system offers much in the way of safeguarding through peer review, it also imposes limitations on what research sees the light of day, in order to maintain a dynamic, interesting catalogue of articles and, as a result, retain commercial viability (Yong, 2012). Under this dynamic, positive results are favoured considerably, being roughly three times more likely to be published than negative ones (Dickersin et al., 1987). When it comes to replications, it’s clear that while this issue remains prevalent, attempts to truly evaluate scientific works by publishing replication studies in existing journals will fall flat, purely through the lack of representation of negative results. In the wake of increasing calls for a more efficient way to double-check the work of ourselves and our peers, the emphasis should be on eliminating this ‘file drawer phenomenon’ (Rosenthal, 1979). The reproducibility crisis is nuanced and wide-reaching, but tackling the file drawer must be the first step in the right direction.

Defeating the file drawer is no small task. Even if replications were made a point of emphasis, simply pushing every replication that may have been wrongfully filed away to the forefront of journals in a giant wave would be an extremely inelegant, and unlikely, solution. When the alternative is new ideas and studies, researchers searching for material that pushes their topics of choice forward don’t want to sift through replication studies clogging their regular literature search platforms, however vital those studies are to reproducibility and discourse.

Because of this, and without a venue likely to accept the submission of a failed replication, scientists are discouraged from officially sharing any replication they may attempt internally (Peplow, 2014). Though quality replications are often carried out as a precursor to further exploration, there is no incentive to publish them, as any such articles are likely to meet total disinterest from journals. The scientific community, therefore, cannot rely on traditional journals alone to remedy this particular problem, as the file drawer is inexorably tied to their current publishing models.

This does not mean, however, that replication studies or in-depth evaluations of methods are never discussed. Even though sharing replications through journals is often not an option, academics still pick apart articles of interest, both in labs around the world and, better yet, on each of our online doorsteps. For example, the use of social networking sites by academics from around the globe has given rise to several extremely popular hubs for article discussion, such as Neuroskeptic (DiscoverMagazine.com, 2016), with hundreds if not thousands more sharing and discussing articles (admittedly to varying depths) between colleagues and friends through personal Twitter accounts and blogs. This is rightfully recognised within these spheres as not only hugely useful to research, but also an opportunity for community engagement.

It is entirely within the realm of possibility that the creation of a new publishing platform, focused on hosting formal replications alongside these review-style evaluations of method, would provide a new and more focused home for this type of discussion. In short, such a space could become a hub where information on reproducibility is easily accessible, and where the networking and social value of discourse and participation drives content, as observed in current social media research engagement. Such a platform could also incorporate the most valuable aspect of traditional publishing, the peer review system, in a manner driven not by commercial interest but by the quality of the replications and evaluations submitted. This would provide a file drawer-free pathway to ensuring reproducibility data makes its way to the surface.

The information hosted on this new-model publishing platform could then be made readily available to those viewing the original article on its native website, in a manner similar to Altmetric (Altmetric.com, 2016b). That is, an aggregation of positive, negative, and inconclusive replications and evaluations, visualised in an easily read widget with the further opportunity to click through and analyse these sources directly, would be included alongside the original publication.
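To make the idea concrete, here is a minimal sketch of the kind of aggregation such a widget might perform, written in Python. Everything in it is an assumption for illustration: the Replication record, its field names, and the example DOIs and URLs are hypothetical, not part of any existing service’s API.

    from collections import Counter
    from dataclasses import dataclass

    # Hypothetical record of one replication or evaluation hosted on the
    # proposed platform; the schema is illustrative, not a real standard.
    @dataclass
    class Replication:
        doi: str      # DOI of the original article being replicated
        outcome: str  # "positive", "negative", or "inconclusive"
        url: str      # link back to the full replication report

    def summarise(replications: list[Replication]) -> str:
        """Aggregate outcomes into the short summary a widget might display."""
        counts = Counter(r.outcome for r in replications)
        return (f"{counts['positive']} positive / "
                f"{counts['negative']} negative / "
                f"{counts['inconclusive']} inconclusive "
                f"({len(replications)} total)")

    # Example: three hypothetical replications of a single article.
    records = [
        Replication("10.1000/example.1", "positive", "https://example.org/r/1"),
        Replication("10.1000/example.1", "negative", "https://example.org/r/2"),
        Replication("10.1000/example.1", "inconclusive", "https://example.org/r/3"),
    ]
    print(summarise(records))  # 1 positive / 1 negative / 1 inconclusive (3 total)

A real widget would of course pull these records from the platform’s database and render them graphically, but the underlying operation is no more complicated than this grouping and counting.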

Overall, implementing such a system would vastly improve the accessibility of research: both figuratively, by providing links to peer-reviewed replications that have not been filtered by the file drawer, and literally, by enabling an overview of replication information at a glance.

Though financial incentives could later be devised to encourage prompt replication and evaluation of possible breakthrough studies, we first require a solid foundation to build from. Our reproducibility crisis is complex, and though not every aspect can be addressed through this suggestion alone, I believe that a new-model platform such as the one described here would provide said foundation, with ample scope for further adaptation.

References

Altmetric.com. (2016a). Altmetric.com. Retrieved 10 June, 2016, from https://www.altmetric.com/

Altmetric.com. (2016b). The Donut and Altmetric Attention Score. Retrieved 10 June, 2016, from https://www.altmetric.com/about-altmetrics/the-donut-and-score/

Dickersin, K., Chan, S., Chalmers, T. C., et al. (1987). Publication bias and clinical trials. Controlled Clinical Trials, 8(4), 343–353.

DiscoverMagazine.com. (2016). Neuroskeptic. Retrieved 10 June, 2016, from http://blogs.discovermagazine.com/neuroskeptic/

Peplow, M. (2014). Social sciences suffer from severe publication bias. Nature. doi:10.1038/nature.2014.15787

Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86(3), 638.

Yong, E. (2012). Replication studies: Bad copy. Nature, 485, 298–300.

Yong, E. (2016). Psychology’s replication crisis can’t be wished away. Retrieved 10 June, 2016, from http://www.theatlantic.com/science/archive/2016/03/psychologys-replication-crisis-cant-be-wished-away/472272/

Reviews

Arin Basu

'It is entirely within the realm of possibility that the creation of a new publishing platform, focused on hosting formal replications alongside these review-style evaluations of method, would provide a new and more focused home for this type of discussion. In short, such a space could become a hub where information on reproducibility is easily accessible, and where the networking and social value of discourse and participation drives content, as observed in current social media research engagement.'

This is not only possible, but actually needed. It would solve a major problem we currently face in terms of having an outlet for replication studies that are in turn peer reviewed and archived. However, this is just one way of dealing with the larger problem of file drawer bias; the core issue still remains to be addressed: how do we get smaller studies with equivocal findings, which are nevertheless "clean" studies in dealing with bias and confounding variables, into the public domain and make them accessible to the interested scientist and reader community? Perhaps an additional layer of acceptance and peer reviewership would be needed for these studies and replication studies.

Is the journal the most appropriate format? This could lead to new debate, as the traditional way journals display information is clearly incompatible with new developments in web-based reading. While maintaining all the core functions (showing data in the form of tables and charts, clearly marking attributions in the form of citations and referencing, and allowing for a 'permalink' that is at once indexed by the academic search engines) should be a sine qua non of such a publication effort, there is also space to discuss what format might be preferred.

In summary, I would very much like to see a follow-up on this thought, and perhaps a prototype that we, the scholarly community, can move to the stage of an actual publication and reality. Time to roll up our sleeves.

License

This article and its reviews are distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and redistribution in any medium, provided that the original author and source are credited.