Hey Reddit folks! My name is Charley Trowbridge and I am the Director of Peer Review Operations at the ACS. Along with my group, which consists of 15 team members distributed around the country and the globe, I am responsible for the support and maintenance of the peer review system that the ACS uses for all of its journals and books, and for the administrative support of our ca. 500 worldwide editorial offices. We strive to ensure that submitted content receives swift and thorough review, and are constantly looking for ways to improve our processes and policies to make submitting to ACS journals as easy as possible, while maintaining the highest possible quality of review experience.
Recently we have also dedicated ourselves to developing the ACS Reviewer Lab (https://www.acsreviewerlab.org/), a free online interactive course that we have developed and launched to educate researchers on the principles of quality peer review. Anyone can take the course, which takes about four hours to complete in total. You can go through its six modules at your own pace, and have 30 days to complete it.
Also, September 11-17 is Peer Review Week - follow the conversations via #PeerRevWk17 on Twitter.
I have been at the ACS for 11 years, and have been involved in the development and implementation of web-based peer review for about 16 years. Before coming to the ACS I worked for many of the major science publishers in a variety of roles and capacities, and I have been involved in scholarly publishing for the past 35 years overall. I have a BA in comparative literature, with a concentration in German. I lived and worked in Germany for two years.
Ask me anything about the peer review system and process at the ACS, about how we handle submissions, and about how ACS supports authors, reviewers, and editors.
I’ll be back at 11am EDT (8am PDT, 3pm UTC) to start answering your questions.
Logging in at 11am EDT.
Logging off at 12:31pm EDT.
According to a new report, the 2% of peer-reviewed, published climate-change-denying studies that were examined were found to have significant flaws.

How can these reports make it through the peer-review process without being caught?
Is there any kind of formal process within the scientific community to de-legitimize reports whose findings are flawed, even after they've been published?
There are formal processes to investigate and, when necessary or appropriate, to withdraw or retract research findings that are found to be clearly flawed. We try to prevent the need for that by running the most thorough peer review process possible, which involves being aware of any potential conflicts of interest on the part of either authors or reviewers, and ensuring that the scientific content submitted to us receives unbiased, objective evaluation.
I have a few questions, so feel free to address those which you find most compelling:
1) What role do you see for AI (specifically, machine learning) in the peer review process in the future?
2) How do you balance the need for a standardized process and platform on one hand with the specific eccentricities of each journal on the other? To expand a little, do you allow full customization of the peer review process per journal, or do you try to implement best practices (or both!)?
3) What Author Services are you excited about, moving forward? What does ACS offer that other publishers don't?
Thank you in advance!
Thanks for a really great set of questions :-).
AI/ML is now playing a small but significant role in certain areas of the editorial workflow. Right now it is mainly being used to help authors identify appropriate journals where they can submit their work, and, at least in some instances, to help editors with the search for appropriate reviewers. Right now those algorithms are good, but nowhere near perfect. Given the rapid pace of development, I can see those two algorithmic applications getting much better in the near future (say 1-2 years). Beyond that, I can also see them eventually helping authors with writing and with improving the discoverability of their work (so, a machine "reads" the published record and can point you to important terminology and descriptors). Not sure about predictive-value algorithms yet, as we are still in the early stages with them.
That is a tough question (the standardization vs. specialization one) and it is one that we constantly struggle with. Right now we are in the process of "stepping back" and looking at our whole portfolio to identify areas of overlap and to streamline and standardize our requirements to the greatest extent possible. We also like to differentiate between the requirements for an original or first submission, where we are mainly focused on just the scientific content of an article, and a submission where a revision has been requested and is therefore typically on a trajectory for eventual publication. As for your question about allowing customization of process and adhering to best practices, I have to say both ;-).
Right now we are actually most excited about the introduction of our ACS Reviewer Lab, which I mentioned in my intro. We see educating peer reviewers about best practices as a real need in the scholarly publishing space, and filling that need as part of our mission. We already provide English editing, formatting, and graphics services, and we look forward to expanding that list to perhaps include data checking services. We currently offer data checking as a standard part of review at individual journals but would like to make that more broadly available.
Edit done to correct typo in #2.
Thanks for coming to talk with us! It seems like there's a narrow band where people are senior enough to have enough experience to be good reviewers, but not so senior that they're too busy. How do you reach out to people on either side of that band?
Thanks for taking the time to read and post. You ask a good question, and the basic answer is that this is a constant struggle that requires a constant effort from our editors. As I have mentioned elsewhere here today, one way to try and address this issue is to constantly "mine" the influx of new authors as they come into the realm of publication. Another way, and one we are now trying, is through educational outreach efforts like the ACS Reviewer Lab. When we began that effort we were surprised to learn how little formal curriculum attention is given to peer review and we think giving people some real guidance about how to provide a good review might help to increase the pool of available good reviewers, whether they are newly "minted" or seasoned scientists.
I submitted an article to a journal and it's been almost 3 months since they last contacted me. They said congrats, your submission passed initial screening and will now be sent to the reviewers. It still says "under review" but no GE has been assigned. What should I do?
Sorry to hear about that. I think it would be fair at this point to reach out to the journal office and ask what's up. Three months is a long time, and you are fine to ask the journal for an update at this point.
What's the average age of a peer reviewer? What percent of the reviewers have published in ACS journals?
Hey there. Good question, but unfortunately not one that I exactly know the answer to. I reckon the average age of peer reviewers roughly matches the average age of people in a given field, but this is not data that we collect, so I can't give you a number. As for the percentage of reviewers who publish with us, again, I can't give a hard number, but anecdotally I can say that it is probably about half or better in any given year. We get new authors every year, and hence a new potential pool of reviewers. We do make deliberate attempts to invite new authors to participate in the peer review process, and I do know of instances where certain editors make a point of getting submitting authors to become reviewers.
What are some of the more challenging aspects of the peer review process?
Tough question, but thanks for asking it. There are many challenges to running a good peer review process (and there are many forms and varieties of peer review) but the main one as I see it personally is to ensure that every author gets thorough, constructive, and useful feedback about her/his work and that even if the review process does not result in publication, the submitting researcher benefits from the experience. Again, speaking personally, that is always the main challenge, and a journal's ability to provide that solid feedback is critical to its success, and what keeps us all moving forward.
This article and its reviews are distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and redistribution in any medium, provided that the original author and source are credited.