I remember my first peer review. An Editor for a well-respected Elsevier journal in Earth Sciences emailed me during the second year of my PhD, asking me to peer review a paper for them. I hadn’t published anything by this point of my PhD, and had received no formal training in how to peer review papers. I initially declined, but was pretty much coerced into doing it, despite my reservations. “It’ll be great training and experience”, I was told. Go on. Go on go on go on go on go on. In the end, I did the review, but got my supervisor to check it over to make sure I was fair, thorough, and constructive. I remember him saying “This is surprisingly good!”, and thinking ‘Thanks…’. But his response was more because it was my first peer review, without any training in how to do it, rather than anything to do with my ability as a scientist. And rightly so – why should I have been expected to do a good job of peer review at such an early stage in my career, and with no formal training?
I wonder then how many other PhD students are told the same, and thrown into the deep end. ‘Peer review for this journal and receive fame and glory. It doesn’t matter how well you do it, as long as you do it.’
I get the feeling that some researchers regard public, post-publication peer review as a non-rigorous, non-structured and poor alternative to traditional peer review. Much of this might be down to the view that there are no standards, and no control in a world of ‘open’.
This couldn’t be further from the truth.
At venues like ScienceOpen and F1000 Research, there is full Editorial control over peer review. The only difference is that there is an additional safeguard against fraud and abuse. In public peer reviews, the quality (and quantity) of the process is made explicit. Both the report and the identity of the reporter are made open. This type of system invites civility and community engagement, and lays the foundation for crediting referees. It also highlights an under-appreciated and overlooked aspect of the work that scientists do to advance knowledge in the real world.
ScienceOpen Editor Dan Cook said, “Personally, I think the public needs to know how hard scientists work to advance our understanding of the world.”
At ScienceOpen, the Editorial office plays two roles. First, the Editorial team for ScienceOpen Research performs all the basic standards checks to make sure that research published is at an appropriate scientific standard. They attempt to protect against pseudoscience, and ensure that the manuscript is prepared to undergo public scrutiny. Second, there are Collection Editors, who manage peer review, curation, and discussion about their own Collections.
Why is Editorial control so important?
For starters, without an Editor, peer review will never get done. Researchers are busy, easily distracted, and working on 1000 other things at once. Opting to go out into the world and randomly distribute your knowledge through peer review, while selfless, is actually quite a rare phenomenon.
Peer review needs structure, coordination, and control. In the same way as traditional peer review, this can be facilitated by an Editor.
But why should this imply a closed system? In a closed system, who is peer reviewing the Editors? What are editorial decisions based on? Whom are Editors selecting as reviewers, and why?
These are all questions that are obscured by traditional peer review, and traits of a closed, secretive, and subjective system – not the rigorous, objective, gold standard that we hold peer review to be.
At ScienceOpen, we recognise this dual need for Editorial standards combined with transparency. Transparency leads to accountability, which in turn lends itself to a less biased, more rigorous and civil process of peer review.
How does Editorial coordination work with Collections?
Collections are the perfect place to demonstrate and exercise editorial management. Collection Editors, of which there can be up to five per Collection, have the authority to manage the process of peer review, but out in the open.
They can do this either by inviting external colleagues to review papers within the system, or, if those colleagues already have a profile with us, by simply inviting them to review specific papers; referees will then receive an invitation to peer review.
Quality control is facilitated through ORCID, as referees must have 5 items associated with their account in order to formally peer review. And to comment, all you need is an ORCID account, simples!
The major difference between a traditional Editor and a Collection Editor is selection. As a traditional Editor, you wield supreme power over what ultimately becomes published in the journal by deciding what gets rejected and what gets sent out to peer review. As a Collection Editor, you don’t reject anything – you filter from pre-existing content depending on your scope.
Recently, Figshare also launched their pretty cool Collections feature, which is awesome in embracing the additional dimension of non-traditional research outputs with this concept. Figshare now joins ScienceOpen and Mendeley, among others, in recognising the value of thematic groups of digital objects, where the scope and content is defined by the research community, independent of journals and publishers.
ScienceOpen now has 175 Collections, each one representing a place to openly engage with research through peer review, discussion, sharing, and recommending. Each one is managed by a group of Editors or a single Editor, whose role is to assemble the Collection, curate it, and foster community engagement.
The value of this is twofold. Firstly, Editors create and manage a valuable resource for their communities, which anyone can openly contribute to. Secondly, this provides a platform to develop new skills for researchers: public peer review, community management, editorial control. Each of these is part of an essential and core skill-set for researchers.
If you would like to become a Collection Editor, simply shoot us an email at: Jon.Tennant@scienceopen.com, or tweet us at @science_open if that’s your preferred method (or just leave a comment here)! All it takes to become an Editor is your interest. We don’t exclude anyone, we just want to know who is building one so we can provide the best support possible!
We look forward to working with you and making science more open 🙂
The arXiv is a server that hosts ‘eprints’ or ‘preprints’ of research papers, and is a key publishing platform for many fields, particularly physics and mathematics. Founded back in 1991 by Paul Ginsparg, it currently hosts over 1 million research articles, with more than 8000 submissions per month!
Despite having now been running for 25 years, the arXiv still represents one of the greatest technological innovations to utilise the Web for scholarly communication.
While the majority of the content submitted to the arXiv is subsequently also submitted to traditional journals for publication, there is still content which never goes beyond its confines. Irrespective of this, communities engaged with the arXiv still cite articles published there, whether or not they have been formally published in a journal elsewhere.
This is the whole purpose of the arXiv: to facilitate rapid peer-to-peer communication so that science accelerates faster. The fact that all articles are publicly available is incidental, and just happens to be a topic of major interest with the growing open access movement.
However, the arXiv is not peer reviewed in the formal sense. It is moderated, so that junk submissions can be removed, or manuscripts recategorised, but it lacks the additional layer of quality control of traditional peer review.
So while some might think this poses a risk, ask yourself this question: do you re-use articles critical to your research without making sure that you have checked and understood the research to a sufficient degree that you can appropriately cite it? Because that’s peer review, that is, and it applies irrespective of whether an article has already been peer reviewed or not.
The Zika virus is an international public health emergency, as declared early on in February by the World Health Organisation. As such, it is critical that the global research community help combat this threat as rapidly and efficiently as possible. This is a case when science can quite literally save lives.
Recently, an article on the host-vector ratio in the Zika virus was published on the arXiv, a platform for articles often called ‘preprints’. This means that the work has not yet been peer reviewed, nor can it be commented on within the arXiv itself, due to functional constraints. The paper is stuck in the hidden, timeless limbo of peer review until its eventual emergence as a paper or ultimate rejection.
Traditional models of peer review occur pre-publication by selected referees and are mediated by an Editor or Editorial Board. This model has been adopted by the vast majority of journals, and acts as the filter system to decide what is considered to be worthy of publication. In this traditional pre-publication model, the majority of reviews are discarded as soon as research articles become published, and all of the insight, context, and evaluation they contain are lost from the scientific record.
Several publishers and journals are now taking a more adventurous exploration of peer review that occurs subsequent to publication. The principle here is that all research deserves the opportunity to be published, and the filtering through peer review occurs subsequent to the actual communication of research articles. Numerous venues now provide inbuilt systems for post-publication peer review, including ScienceOpen, RIO, The Winnower, and F1000 Research. In addition to those adopted by journals, there are other post-publication annotation and commenting services such as hypothes.is and PubPeer that are independent of any specific journal or publisher and operate across platforms.
One main aspect of open peer review is that referee reports are made publicly available after the peer review process. The theory underlying this is that peer review becomes a supportive and collaborative process, viewed more as an ongoing dialogue between groups of scientists to progressively assess the quality of research. Furthermore, it opens up the reviews themselves to analysis and inspection, which adds an additional layer of quality control into the review process.
This co-operative and interactive mode of peer review, whereby it is treated as a conversation rather than a selection system, has been shown to be highly beneficial to researchers and authors. A study in 2011 found that when an open review system was implemented, it led to increasing co-operation between referees and authors, as well as an increase in the accuracy of reviews and an overall decrease in errors throughout the review process. Ultimately, it is this process which decides whether research is suitable or ready for publication. A recent study has even shown that the transparency of the peer review process can be used to predict the quality of published research. As far as we are aware, there are almost no drawbacks, documented or otherwise, to making referee reports openly available. What we gain by publishing reviews is the time, effort, knowledge exchange, and context of an enormous amount of currently secretive and largely wasted dialogue, which could also save around 15 million hours per year of otherwise lost work by researchers.
Open peer review has many different aspects, and is not simply about removing anonymity from the process. Open peer review forms part of the ongoing evolution of an open research system, and the transformation of peer review into a more constructive and collaborative process. The ultimate goal of traditional peer review remains the same – to make sure that the work of authors gets published to an acceptable standard of scientific rigour.
There are different levels of bi-directional anonymity throughout the peer review process, including whether the referees know who the authors are but not vice versa (single blind review), or whether both parties remain anonymous to each other (double blind review). Open peer review is a relatively new phenomenon (initiated in 1999 by the BMJ), one aspect of which is that the authors’ and referees’ names are disclosed to each other. The foundation of open peer review is based on transparency, to avoid competition or conflicts born out of the fact that those who are performing peer review will often be the closest competitors to the authors, as they will tend to be the most competent to assess the research.
For the majority of scientists, peer review is seen as integral to, and a fundamental part of, their job as a researcher. To be invited to review a research article is perceived as a great honour due to its recognition of expertise, and it forms part of the duty of a scientist to help progress research. However, the system is in a bit of a fix. With more and more being published every year, and ever-increasing demands on the time and funds of researchers, the ability to competently perform peer review is dwindling simply due to competition with other aspects of duty. Why, many researchers might ask, should they spend their valuable time reviewing others’ work for little to no recognition or reward, as is the case with the traditional model? Indeed, many publishers opine that the greatest value they add is through managing the peer review process, which in many cases is performed on a volunteer basis by academic Editors and referees, and is estimated to cost around $1.9 billion in management per year. But who actually gets the recognition and credit for all of this work?
It’s not too hard to see that the practices of, and attitudes towards, ‘open science’ are evolving amidst an ongoing examination of what the modern scholarly system should look like. While we might be more familiar with the ongoing debate about how best to implement open access to research articles and to the data behind publications, discussions regarding the structure, management, and process of peer review are perhaps more nuanced, but arguably of equal or greater significance.
Peer review is of enormous importance for managing the content of the published scientific record and the careers of the scientists who produce it. It is perceived as the golden standard of scholarly publishing, and for many determines whether or not research can be viewed as scientifically valid. Accordingly, peer review is a vital component at the core of the process of research communication, with repercussions for the very structure of academia which largely operates through a publication-based reward and incentive system.