I'm Alberto Pepe, the guy challenging Elsevier. I'm Co-Founder of Authorea - a budding collaboration and publishing platform for academics to write, share and discuss research and scholarly articles for free and in real time. Ask Me Anything!

Abstract

Hello r/science! My name is Alberto Pepe; my co-founder, Nathan Jenkins, and I created Authorea after we both grew disappointed with the slow, inefficient, and obsolete ways in which research papers are written and disseminated.

We believe:

  • in breaking the cycle of the individual in academia and squashing the collective obsession with academic rankings;
  • that open science and open scholarship will positively shape the future of academic publishing and accelerate the rate of scientific discovery;
  • in researchers retaining ownership of their work and in the public having free, permanent access to research (even if an experiment 'failed');
  • in the 'paper of the future': a static paper is no longer an acceptable form of submission; instead, articles should be born digital, in a format that allows the reader to access and interact with the most important elements of a researcher's work: the data and the code.

I’ll be back at 1 pm EST (10 am PST, 6 pm UTC) to answer your questions, ask me anything!

Edit 1: Online now! I am starting to answer questions. 40 comments already!

Edit 2: It's now 3PM ET. I will keep answering for one more hour, then off for a jog. (54 questions)

Edit 3: 3:45 PM ET. I am going to take a break, go for a jog, dinner, etc. I will be back at this later tonight. I only managed to answer ~15 out of 60+ questions. There are some amazing ones I want to answer later.

Hi Alberto.

I've seen a couple of proposals like yours and I'm always left with a question. Currently there are just too many papers published; the only way to even know what's going on is to use automatic algorithms that decide for you which titles might be relevant (e.g., the Google thingy). Aren't proposals to make publishing much faster, with review done afterwards by readers only, going to make things much, much worse?

lucaxx85

The ever-increasing scale and size of knowledge output is something we need to deal with no matter what, I think. I believe it goes beyond scientific publishing - the same has happened to all domains of knowledge creation, including fiction (self-published eBooks), film, photo, and audio (e.g. YouTube), and news (social media such as blogs and Twitter). As the barriers to entry for publishing become ever lower, we have to find new ways of organizing and recognizing valuable information.

It is no accident that the early players who managed to organize the free dissemination of information on a planetary scale are now some of the highest-valued companies in the world. As science becomes more data-driven and computational in nature, scientific work - and hence literature, methods, and datasets (as well as collaborations!) - will keep increasing in both frequency and size.

In my opinion, we shouldn't fear this acceleration of the process of human discovery, and we shouldn't see it as an overwhelming development. Rather, we should be humbled by the progress we are able to make and become ever better at summarizing and organizing the growing body of scientific knowledge. There are entire disciplines dedicated to studying these challenges - Knowledge Management, Information Theory and Retrieval, Data Science, Information Seeking Behavior, and so on (the field of Information Science, broadly). A promising new ecosystem of tools and initiatives is emerging, with the aim of building solutions for the management of scientific information. I am happy that Authorea is part of that ecosystem.


Hi Alberto,

I'm interested in understanding where the quality control is going to come from. The current quality control system is broken--an overworked editor and 2-3 reviewers who all have shit to do other than volunteer their time to nitpick a paper, so they skim it and write the first thing that comes to mind--but I'd like to know if you have a better system for quality assurance of research in mind.

There are thousands of manuscripts published each day. And that doesn't account for the millions of null-result papers sitting in desk drawers, or the millions of papers with less-than-rigorous research methods. How does a system like this assure readers of the quality of the work they are accessing?

captainpotty

Hey captainpotty- Authorea currently has a very "light" review system, which means that you can simply comment on specific sections of a paper. We built this commenting/annotation system mostly for authors, so that they could discuss with their coauthors (privately or publicly). One day, however, we noticed that an Authorea user who had just finished the final draft of a climate science paper, before sending it to a journal, posted the link to it on Twitter, saying: "Now open for comments". He and his coauthors collected 60+ public comments (plus many other private ones) from the public (signed-in users as well as anonymous, non-signed-in ones). That was an example of open pre-publication peer review: you let anyone review your work before it is even submitted to a journal. The authors told us that the final manuscript they ended up submitting was a lot stronger because it had already been reviewed (and by more than 2-3 reviewers)!

I understand that a major concern for this sort of system would be assuring the highest quality of the review mechanism. It does sound scary to open up the scholarly reviewing mechanism - which is by and large the filter that separates science from non-science. The truth is: the peer review mechanism is far from perfect. It has many, many problems today. What we propose (not today, but in the near future) is a "karma" system (similar to reddit's) whereby your contributions and reputation as an open peer-reviewer can be voted upon. I understand that such a system could be gamed. It is not without problems, but if it existed in addition to the current (problematic) closed peer review system, I think the result would overall be a more rigorous and better reviewing system.
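To make the "karma" idea a bit more concrete, here is a minimal sketch of how karma-weighted voting on reviews could work. This is purely illustrative - nothing like it exists on Authorea today, and the names and update rule are just one possible design:

    # Hypothetical sketch of a karma-weighted review system.
    # Reviewers earn karma when their reviews are upvoted, and
    # votes cast by high-karma users carry more weight.
    from dataclasses import dataclass, field

    @dataclass
    class Reviewer:
        name: str
        karma: float = 1.0  # everyone starts with a small baseline weight

    @dataclass
    class Review:
        author: Reviewer
        text: str
        votes: list = field(default_factory=list)  # (voter, +1 or -1) pairs

    def review_score(review: Review) -> float:
        """Sum of votes, each weighted by the voter's current karma."""
        return sum(voter.karma * value for voter, value in review.votes)

    def update_karma(review: Review, rate: float = 0.1) -> None:
        """Credit (or debit) the review's author based on its weighted score."""
        review.author.karma = max(0.1, review.author.karma + rate * review_score(review))

    # Example: a senior and a junior reviewer both upvote carol's review.
    alice, bob, carol = Reviewer("alice", 5.0), Reviewer("bob"), Reviewer("carol")
    report = Review(author=carol, text="The statistics in Sec. 3 need a control.")
    report.votes += [(alice, +1), (bob, +1)]
    update_karma(report)
    print(f"score={review_score(report):.1f}, carol's karma={carol.karma:.2f}")

Run as-is, this prints a score of 6.0 and bumps carol's karma from 1.00 to 1.60; a real system would of course need safeguards against vote rings and other gaming.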


The nearly non-existent peer review process that many open science journals perform doesn't seem to do much to keep out trash science.

How do you think these platforms can evolve to limit the exposure and credibility given to pseudoscience? These papers can be particularly damaging when "research" driven by a political agenda is published (along the lines of the racist pseudoscience "journal" Open Behavioral Genetics). It can be very difficult for lay audiences to differentiate quality research from intellectually dishonest, agenda-driven pseudoscience in the absence of careful expert review. How does your platform work to manage this risk?

p1percub

Hi p1percub. This is a great and entirely valid question. It is also a hard one to answer, and while I don't have a full solution, some aspects of how we can build a better Peer Review 2.0 already seem clear, imho.

  1. Reviews need to have increased transparency. Not necessarily by moving away from "blind" reviews, but at the very least the review's text needs to be made part of the accepted proceedings. In other words, peers should be rewarded for their reviews. In a system that works properly, peer reviews are just as important as the reviewed papers.

  2. Post-proceedings reviews may allow inaccurate proceedings reviews to be retroactively corrected - upgraded or downgraded. Since hindsight is 20/20, the publishing process should allow papers to be explicitly annotated with newly acquired knowledge years after they are accepted. We think this is not unscientific; in fact, it is the most scientific thing there is. We recently wrote a post listing 11 courageous retractions in science. Overall, a politically motivated paper may slip past the proceedings for a brief time, but once under public scrutiny it should be downgraded or even retracted.

  3. A key idea that may bring a lot of merit is "karma". The exact details are quite tricky (and extremely interesting), and I discussed them briefly (very briefly) in another answer, to captainpotty. The gist: papers that lack reviews agreeing, with sufficient weight, on the merit of their results will not be treated as equals to strong, well-reviewed scientific results.

  4. Healthy incentives for reviewers are another aspect of both increasing review quality and boosting transparency. If reviewers can receive academic merit for well-reasoned and insightful reviews, they will devote extra effort to them and welcome the extra exposure of their writing. Our friends at Publons are working on exactly this. It's a great start.

  5. Move away from quota-based publishing. With the exception of the rare cases where the work is clearly unscientific or the methods are completely unreliable, the review process should always aim to improve the submitted work into a fully acceptable manuscript. There are already many venues without deadlines, where submissions can arrive at any point, which is great. Additionally, we should stop treating submissions that are off-topic for a venue as "rejects" and instead think of them as "redirects", automating the resubmission flow into a sibling venue on a similar topic.

In short, we need to build on the established classic peer-review process, combine it with the best practices of large scale question-answer services with karma such as Quora, StackOverflow and Reddit, and adapt it to a digital reality that isn't constrained by the price and scarcity of ink and paper.

Note: At Authorea, we currently offer neither peer-review, nor submission capabilities, and think of ourselves as first a writing platform and second a pre-print server for science. But we have thought long and hard on the subject of peer-review, and may make steps in that direction in the future. Already today, we offer one of the most advanced automations for resubmission via our Export feature, which allows restyling a manuscript to a journal's requirements in a matter of seconds.

Thanks for your question.


  1. How would you monetize your alternative system? Elsevier and co. are extremely profitable because of the ridiculous subscriptions that universities and research centers pay for. But even free alternatives like PLOS had to be monetized to pay for servers, editing, and website management. They chose to do so by making researchers pay to publish. Would your system do the same?

  2. How would you measure the impact of a "paper" (or whatever that non-static publication would be called) without any ranking system? Indexes are questionable because they rely only on citations to measure impact, but they serve a purpose and fill a need, and cannot be removed without alternatives.

pink_ego_box

  1. Our business model is freemium: Authorea is free to use as long as you produce public content. You can write as many documents as you like on Authorea for free as long as you keep them "open". If you want to create private documents, you pay for private hosting. By using this model, we are encouraging authors to do "open research". Please note that Authorea does not hold copyright on the content: all content produced remains the property of its authors. An author can delete, download, or modify content at any point in time. So, in that sense, we are very different from traditional publishers (including PLOS, which is basically a traditional publisher). For the time being, we plan to keep charging scholarly authors, who pay because they are using a collaborative writing tool built exactly for them. We are also getting increasing interest from institutions, departments, and research labs who want to buy group licenses.

  2. As I said in a couple of other answers, I am not advocating for "no ranking system". However, scholars' contributions should be assessed across more components than simply the number of citations. I suggest, in another answer, "number of forks" as a way to assess a scholar's impact in "giving birth" to new research results. In other words, if a piece of code you wrote in, say, a genomics paper is readapted by astronomers to bring about an amazing new discovery in, say, exoplanetary science, we should be able to reward you, and to visualize that provenance.


Hi Alberto,

I think what you're doing is great; I would love to see a shift in how academic literature is shared and I think we are long overdue for the modernization that has already been readily accepted in other fields.

Unfortunately, I find that these academic journals all appear to suffer from low impact factors currently. What would you say to a budding scientist--whose career depends upon publishing in "high quality" journals--that would encourage them to publish in your journal?

Thanks!

Jenna_bird

Hi Jenna- I understand your concern: no matter how open and innovative you may want to be, at the end of the day, as a scientist, you are required to publish in "high quality" journals, because that is the only way to get tenure, grants, and recognition in your community. I think that in the next few years we will see new metrics and methods to assess a scientist's contribution (metrics that go way beyond a journal's impact factor and/or number of citations). But while we wait for that to happen, what can you do TODAY? Two things come to mind:

(1) Deposit a pre-print or post-print of your article in an institutional, disciplinary (or any other) repository that is indexed by scholarly search engines. A pre-print is the version immediately prior to what is published in a journal (and a post-print the version immediately after). Even if you lose your copyright on the published version, the pre-print and post-print versions are YOURS. By depositing an open version of your work, you are giving the entire world open access to it (and you and your work also become more visible!). If you write an article on Authorea, all you have to do is make it public and it will be a pre-print!

(2) If you have datasets and code associated with your work, publish them with the paper. Publishing the data and code behind your papers and figures makes your work more likely to be reproduced (and, again, makes you and your work more visible!). Most journals today do not allow you to deposit data and code with your paper, and putting a link to a dataset in your paper is not enough, because those links die with time. There are tools like Figshare, Zenodo, and Dataverse that allow you to deposit data/code and get a DOI for it. In Authorea, you can go even a step further and include the data and code inside your paper. My dream is that the paper of the future will make data and code first-class citizens.
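To show how simple step (2) can be, here is a minimal sketch of depositing a dataset on Zenodo through its public REST API to get a DOI. The token and file name are placeholders; check Zenodo's API documentation for the authoritative details:

    # Sketch: deposit a data file on Zenodo and mint a DOI for it.
    import requests

    BASE = "https://zenodo.org/api/deposit/depositions"
    params = {"access_token": "YOUR_ZENODO_TOKEN"}  # placeholder token

    # 1. Create an empty deposition.
    dep = requests.post(BASE, params=params, json={}).json()

    # 2. Upload the data file into the deposition's file bucket.
    with open("results.csv", "rb") as fp:
        requests.put(f"{dep['links']['bucket']}/results.csv", params=params, data=fp)

    # 3. Attach minimal metadata.
    metadata = {"metadata": {
        "title": "Raw data for: My paper",
        "upload_type": "dataset",
        "description": "Data and code behind the figures.",
        "creators": [{"name": "Pepe, Alberto"}],
    }}
    requests.put(f"{BASE}/{dep['id']}", params=params, json=metadata)

    # 4. Publish: Zenodo mints a persistent DOI for the record.
    record = requests.post(f"{BASE}/{dep['id']}/actions/publish", params=params).json()
    print("DOI:", record["doi"])

Once the DOI exists, you can cite the dataset in the paper like any other reference, and the link will not rot.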


What motivated you to undertake this, was there a particular event involved?

What sort of response have you received from elder academics who already struggle with digital formats?

nate

Hey Nate- thanks for the question. I hope your Ph.D. is going well! Authorea was born out of frustration with the way science is written, disseminated, and published. I met my co-founder, also a physicist, while I was working at CERN. We were both writing research papers and hating the process. To be precise, we loved doing research (collecting and crunching data, writing code, etc.), but the moment we had to start writing a paper, we realized that there was no tool out there that enabled collaborative writing in a way that made sense to us. While having dinner with Nate (years after CERN) in a cool Neapolitan pizza joint, I shared my idea and frustration, and he said: "Damn! Why isn't there a GitHub for scientific papers?". That was roughly the beginning of Authorea. It was also a good pizza.

It is no secret that Authorea is more appealing to younger scholars: 25 to 45 years old (I am hoping 45 is considered "young"). We have older academics using Authorea too, and they tend to fall into two categories: 1) "cool" professors and researchers who are always at the forefront of technology, and 2) scholars who are curious to know what the younger generation of scientists is doing. Our platform is quite simple to use and it is getting even better (very, very soon it is going to be really awesome). We are here to facilitate the writing needs of all researchers. Even if the learning curve is steeper for some, we're happy to take the time to help anyone who is willing to give us a try.


Napster and other file sharing services helped revolutionize the music industry, which gave birth to iTunes (and 99-cent songs). Do you think that your organization (and Sci-Hub) will do something similar to the scientific publishing industry?

vilnius2013

Hi vilnius2013. I am fascinated by the rise in popularity of Sci-Hub and, in a way, I support it, because it is bringing scholarly and scientific knowledge to people who would not otherwise have access to it. At the same time, I think it is absolutely inconceivable that in the year 2016 we need to resort to the dark web and illegal mechanisms to disseminate scholarly content that should inherently be public, open, and free. What is even more irritating is that we already have a solution, today, for freely and publicly disseminating scholarly knowledge. The solution is called preprints. Astronomers and physicists have been doing it since before the web and the net existed (libraries would print out new unpublished manuscripts and send them around by mail to partner libraries). As I said in another answer, a pre-print is the version immediately prior to what is published in a journal (and a post-print the version immediately after). The pre-print version of an article is the property of the author and can be deposited in a repository (or, if an article is written in Authorea, it is already there). So my question is: why exchange publisher-formatted (illegal) PDFs on a peer-to-peer network when we can exchange those same articles, in HTML format, and legally?


Hi Alberto, I applaud your efforts to improve scientific publishing. But, how do you intend to stop the obsession with rankings and publishing? Like it or not, such things are how hires/promotions are made.

How will your service show how the work has improved over time, and why would I choose to continue adding to an already published work using your service when I could perhaps publish another paper? What metrics would your service offer to encourage people to add to existing works rather than publish many smaller works?

Cheers!

greptomaniac

Hi, you make a very valid point. As I said in another answer, I think that in the next few years we will see new metrics and methods emerge, which will determine hires/promotions/grants based on other, more holistic aspects of a researcher's contribution. Today, citations are the main driver of a scholar's reputation: the more you get cited, the higher your standing (and, of course, citations from top journals count more). My feeling is that in the future, re-using a scientific paper will be more important than citing it. I think Richard from Academia said something similar a while back (but I cannot find it). The idea is that reusing someone else's data, code, methods, or analyses is the highest form of recognition of their work ("Thanks to your data, I made some amazing new research"). Today, we have very little incentive to share research data and code. Authorea is an attempt at giving researchers that incentive. Authorea is built on Git, and it allows authors to include code and data inside papers, and those papers can be "forked", the same way you fork a code repository on GitHub. What I envision is that in the near future, the number of "forks" of a scientific paper will be as important as (if not more important than) the number of citations.
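As a purely illustrative sketch (hypothetical structure, not Authorea's internals), here is how a "fork impact" could be counted by walking a paper's fork graph, crediting direct and transitive reuse alike:

    # Hypothetical: papers as nodes in a fork graph, GitHub-style.
    from collections import defaultdict

    forks = defaultdict(list)  # parent paper -> papers that forked it
    forks["genomics-paper"] = ["exoplanet-paper", "ecology-paper"]
    forks["exoplanet-paper"] = ["exomoon-followup"]

    def fork_impact(paper: str) -> int:
        """Count all direct and transitive forks (the paper's descendants)."""
        return sum(1 + fork_impact(child) for child in forks[paper])

    print(fork_impact("genomics-paper"))  # -> 3: two direct forks + one follow-up

Because the graph records provenance, the genomics authors would get credit not only for the two papers that forked theirs directly, but also for the exoplanet follow-up built on one of those forks.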


With no restrictions on submissions, you'll inevitably have more submissions. However, research is only valuable if it's shared and read. This requires other scientists to find it and to read it.

How do you intend researchers to find relevant papers to read when you currently have no keywords and no search functionality? There is also no DOI-assigning process currently in place.

lasserith

Hi there lasserith,

Your criticism is very well taken. We are trying to solve one problem at a time. Right now, we are focused on making collaborative writing as seamless as possible, and then on becoming a solid preprint repository where (data-driven, interactive) papers are hosted, discovered, and read. We are extremely focused on the former - as you can see in our open product roadmap - but we do have plans to push on the latter in the new year. Specifically, we will soon allow users to tag their articles with keywords (and we will use that information to build "expertise profiles" for users). Moreover, we will augment our (very limited) Browse Articles page with a powerful search engine, so that papers can actually be searched and found. Finally, we will be making our first step in the direction of publishing by allowing users to mint DOIs for their articles (this is in the product roadmap too). This is a natural move, as a lot of content written on Authorea (essays, technical reports, blogs, gray literature) is not meant for journal publication, but authors have told us they need a way to cite and refer to it.

I hope your graduate studies are going well!


What do you mean when you say you're challenging Elsevier? How? What exactly are they doing?

illpomhoye

It is a symbolic thing to say. To be precise, I should say that I am (we are, actually) challenging traditional scholarly publishing; but since the scientific publishing industry is dominated by a giant company - two companies, actually - which have been criticized by the Open Access/Open Science movement, I use "Elsevier" as shorthand for the traditional scholarly publishing industry.


What are your thoughts on the role of impact factor? Does it have a good use, or is it something that you feel should be ignored?

We have run into some questions here, because we use impact factor as an initial screen on submissions in /r/science. We set a threshold of 1.5, which allows most science through while stopping some of the worst-quality journals. We have had to do this because we frequently don't have mods available who are familiar enough with a specific subdiscipline to determine whether a journal is legitimate or fake. Is there a better way we could apply a quick screen for legitimacy on journal articles, similar to how impact factor works?

kerovon

Hi kerovon, thanks for the question. The infamous impact factor is a measure of the frequency with which the average article in a journal has been cited over a particular period (e.g., the last 5 years). As such, it's an important metric in the publishers' world. Nature, Cell, and Science all have very high impact factors, and scholars are pushed to submit their 'best' research to these journals. In fact, having an article published in one of these prestigious journals can make the difference between getting tenure or not. That said, research published in a high-impact-factor journal is not necessarily 'better'. It is usually research that appears groundbreaking and potentially has a broader reach, but not always. Also, to make a paper fit the tight format of a high-impact journal, scholars are usually forced to cut important details, often making their science hard to reproduce. Conversely, an important finding that was not published in a high-impact-factor journal might be overlooked, cited less, and have a smaller immediate reach in the community (which is busy reading the high-impact-factor journals). My impression is that open repositories (like the arXiv and Authorea) can allow a much more democratic assessment of the importance of a research article, without the need for an impact factor. The impact of a paper should not depend on the impact factor of the journal that decided to publish it, but only on the quality and importance of the science it contains.
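For reference, the standard two-year impact factor of a journal for a year y is computed as follows (the 5-year variant mentioned above simply widens the window):

    \mathrm{IF}_{y} = \frac{\text{citations received in year } y \text{ by items published in years } y-1 \text{ and } y-2}{\text{number of citable items published in years } y-1 \text{ and } y-2}

Note that it is an average over a whole journal: it says nothing about how often any individual paper in that journal actually gets cited.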


This AMA is being permanently archived by The Winnower, a publishing platform that offers traditional scholarly publishing tools to traditional and non-traditional scholarly outputs—because scholarly communication doesn’t just happen in journals.

To cite this AMA please use: https://doi.org/10.15200/winn.147394.43850

You can learn more and start contributing at thewinnower.com

redditWinnower

Thank you Winnower for archiving this AMA! Such a great example of applying good scholarly practices outside of scholarship!


Is this just useful for sciences, or for other disciplines as well?

HotKarl_Marx

We started out mostly to satisfy the needs of physicists and astronomers (our fields). Our first implementation only supported LaTeX. We then moved to a more intuitive interface, which can be used by all researchers, including those outside the hard sciences. We are improving it even more now: very soon we will be very much like Word or GDocs in terms of look and feel, while still supporting all the nice scholarly features we already support. So, in short, yes: we are for other disciplines as well!


Do you envision a future where researchers are expected to publicly catalogue the entire experimental process prior to writing and submitting an article for peer review? Is it realistic to expect that the peer-reviewers will go through the process of reviewing raw data and verifying that everything was designed and calculated correctly? Do you plan to integrate with services such as the Open Science Framework to allow for deeper integration of raw data into publications?

shiruken

Good questions, Shiruken. I think researchers should include in their publications the data and code necessary to replicate the results of their research. The process of peer-reviewing can then be easily distributed, because other scholars trying to replicate the research (e.g. building on the results) would be able to verify if there are problems with the underlying science or not. We are friends with the Open Science Framework and we are currently discussing with them an integration between Authorea and OSF. By the way, we had Brian Nosek over at Authorea a few months ago and he gave a great talk on Open Science!



License

This article and its reviews are distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and redistribution in any medium, provided that the original author and source are credited.