There are many amazing blogs and bloggers out there providing critical comments, context, and feedback on the ‘formally published’ research literature. One problem, though, is that these commentaries are often divorced from the papers themselves, perhaps lost on obscure websites, or not reaching the right audience. This seems like an awful waste, don’t you think?
While some great initiatives such as The Winnower will now openly publish blog posts, these still are not connected to the papers they discuss. But what do researchers think about blogging as a form of scholarly communication, in the sense of post-publication peer review?
So, as with most of my ponderings, I took to Twitter to get some feedback with a little poll. I actually framed the question a little ambiguously, but this shouldn’t significantly skew the data in any direction (I hope).
Do you consider blogging to be a form of post-publication peer review?
What is interesting to me is that 41% of those who answered, undoubtedly not just a sample of researchers, do not consider blogging to ‘count’ as peer review. I would really love to know why this is the case for some people. Perhaps they haven’t seen good examples, or perhaps it’s just that blogging isn’t formalised in any way, and is quite disconnected from the research literature.
At ScienceOpen, you can peer review any of 13 million research articles (and climbing every day!). That’s right! Any one you want. Even if an article has been published and ‘passed’ peer review, you can still comment on it. The only reason there would ever be no value in doing this would be if all published work were completely infallible, which is clearly not the case.
The ScienceOpen community has agreed to only allow formal peer reviews from ScienceOpen members who have published at least 5 articles in peer-reviewed journals. For this reason, please do not forget to add your publication history to ORCID. If you do not have an ORCID account, a peer review manager at ScienceOpen can set one up for you. If you would like someone to do this, please send an email to firstname.lastname@example.org before proceeding. Put the phrase “Create accounts” in the subject line, and your name and email address in the body of the email.
But did you know that anyone can review any article they want on ScienceOpen, and not just those from ScienceOpen Research? And perhaps more importantly, anyone can invite anyone else to review any article? That sounds an awful lot like the day job of Editors at traditional journals… but with the power firmly in the hands of researchers and their communities. How cool is that?
It’s super easy to implement too. All you have to do is go to an article of choice, click the ‘Reviews’ button (Step 1), and then select the ‘Invite to Review’ button (Step 2). If you were feeling inclined, you could review the paper yourself too!
You can then simply select their ScienceOpen username (what, you don’t have one yet?!), or invite them by email (Step 3).
I remember my first peer review. An Editor for a well-respected Elsevier journal in the Earth Sciences emailed me during the second year of my PhD, asking me to peer review a paper for them. I hadn’t published anything by that point, and had received no formal training in how to peer review papers. I initially declined, but was pretty much coerced into doing it, despite my reservations. “It’ll be great training and experience”, I was told. Go on. Go on go on go on go on go on. In the end, I did the review, but got my supervisor to check it over to make sure I was fair, thorough, and constructive. I remember him saying “This is surprisingly good!”, and thinking ‘Thanks…’. But his surprise was more because it was my first peer review, done without any training, rather than anything to do with my ability as a scientist. And rightly so – why should I have been expected to do a good job of peer review at such an early stage in my career, and with no formal training?
I wonder then how many other PhD students are told the same, and thrown into the deep end. ‘Peer review for this journal and receive fame and glory. It doesn’t matter how well you do it, as long as you do it.’
I get the feeling that some researchers regard public, post-publication peer review as a non-rigorous, non-structured and poor alternative to traditional peer review. Much of this might be down to the view that there are no standards, and no control in a world of ‘open’.
This couldn’t be further from the truth.
At venues like ScienceOpen and F1000 Research, there is full Editorial control over peer review. The only difference is that there is an additional safeguard against fraud and abuse. In public peer review, the quality (and quantity) of the process is made explicit. Both the report and the identity of the reviewer are made open. This type of system invites civility and community engagement, and lays the foundation for crediting referees. It also highlights an under-appreciated, overlooked aspect of the work that scientists do to advance knowledge in the real world.
ScienceOpen Editor Dan Cook said, “Personally, I think the public needs to know how hard scientists work to advance our understanding of the world.”
At ScienceOpen, the Editorial office plays two roles. First, the Editorial team for ScienceOpen Research performs all the basic standards checks to make sure that research published is at an appropriate scientific standard. They attempt to protect against pseudoscience, and ensure that the manuscript is prepared to undergo public scrutiny. Second, there are Collection Editors, who manage peer review, curation, and discussion about their own Collections.
Why is Editorial control so important?
For starters, without an Editor, peer review will never get done. Researchers are busy, easily distracted, and working on 1000 other things at once. Opting to go out into the world and randomly distribute your knowledge through peer review, while selfless, is actually quite a rare phenomenon.
Peer review needs structure, coordination, and control. In the same way as traditional peer review, this can be facilitated by an Editor.
But why should this imply a closed system? In a closed system, who is peer reviewing the Editors? What are editorial decisions based on? Why and who are Editors selecting as reviewers?
These are all questions that are obscured by traditional peer review, and traits of a closed, secretive, and subjective system – not the rigorous, objective, gold standard that we hold peer review to be.
At ScienceOpen, we recognise this dual need for Editorial standards combined with transparency. Transparency leads to accountability, which in turn lends itself to a less biased, more rigorous and civil process of peer review.
How does Editorial coordination work with Collections?
Collections are the perfect place to demonstrate and exercise editorial management. Collection Editors, of which there can be up to five per Collection, have the authority to manage the process of peer review, but out in the open.
They can do this by either externally inviting colleagues to review papers within the system, or if they already have a profile with us, then they can simply invite them to review specific papers, and referees will receive an invitation to peer review.
Quality control is facilitated through ORCID, as referees must have 5 items associated with their account in order to formally peer review. And to comment, all you need is an ORCID account, simples!
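The eligibility rules described above are simple enough to express as logic. Here is a minimal sketch in Python, using a hypothetical `Member` record (this is an illustration of the stated policy, not ScienceOpen’s actual implementation — the class and field names are invented for this example):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical model of the eligibility policy described above:
# formal peer review requires an ORCID record with at least 5 published
# items; commenting only requires having an ORCID account at all.

REVIEW_THRESHOLD = 5  # minimum ORCID-listed publications needed to review

@dataclass
class Member:
    name: str
    orcid: Optional[str] = None  # e.g. "0000-0002-1825-0097"
    orcid_items: int = 0         # number of publications listed on ORCID

    def can_comment(self) -> bool:
        # Anyone with an ORCID account can comment.
        return self.orcid is not None

    def can_review(self) -> bool:
        # Formal review additionally requires 5+ ORCID-listed items.
        return self.can_comment() and self.orcid_items >= REVIEW_THRESHOLD

# A member with 7 ORCID-listed papers may both comment and review;
# one with an ORCID but only 2 items may comment but not review.
senior = Member("A. Researcher", orcid="0000-0002-1825-0097", orcid_items=7)
student = Member("B. Student", orcid="0000-0001-5109-3700", orcid_items=2)
```

The point of the sketch is just that the bar to participate is deliberately low (an ORCID account), while the bar to formally review is a lightweight, externally verifiable publication record rather than an Editor’s private judgement.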
The major difference between a traditional Editor and a Collection Editor is selection. As a traditional Editor, you wield supreme power over what ultimately becomes published in the journal by deciding what gets rejected and what gets sent out to peer review. As a Collection Editor, you don’t reject anything – you filter from pre-existing content depending on your scope.
Recently, Figshare also launched their pretty cool Collections feature, which is awesome in embracing the additional dimension of non-traditional research outputs with this concept. Figshare now joins ScienceOpen and Mendeley, among others, in recognising the value of thematic groups of digital objects, where the scope and content is defined by the research community, independent of journals and publishers.
ScienceOpen now has 175 Collections, each one representing a place to openly engage with research through peer review, discussion, sharing, and recommending. Each one is managed by a group of Editors or a single Editor, whose role is to assemble the Collection, curate it, and foster community engagement.
The value of this is twofold. Firstly, Editors create and manage a valuable resource for their communities, which anyone can openly contribute to. Secondly, this provides a platform to develop new skills for researchers: public peer review, community management, editorial control. Each of these is part of an essential and core skill-set for researchers.
If you would like to become a Collection Editor, simply shoot us an email at: Jon.Tennant@scienceopen.com, or tweet us at @science_open if that’s your preferred method (or just leave a comment here)! All it takes to become an Editor is your interest. We don’t exclude anyone; we just want to know who is building a Collection so we can provide the best support possible!
We look forward to working with you and making science more open 🙂
The arXiv is a server that hosts ‘eprints’ or ‘preprints’ of research papers, and is a key publishing platform for many fields, particularly physics and mathematics. Founded back in 1991 by Paul Ginsparg, it currently hosts over 1 million research articles, with more than 8000 submissions per month!
Despite having now been running for 25 years, the arXiv still represents one of the greatest technological innovations in the use of the Web for scholarly communication.
While the majority of the content submitted to the arXiv is subsequently also submitted to traditional journals for publication, there is still content which never goes beyond its confines. Irrespective of this, communities engaged with the arXiv still cite articles published there, whether or not they have been formally published in a journal elsewhere.
This is the whole purpose of the arXiv: to facilitate rapid peer-to-peer communication so that science accelerates faster. The fact that all articles are publicly available is incidental, and just happens to be a topic of major interest with the growing open access movement.
However, the arXiv is not peer reviewed in the formal sense. It is moderated, so that junk submissions can be removed, or manuscripts recategorised, but it lacks the additional layer of quality control of traditional peer review.
So while some might think this poses a risk, ask yourself this question: do you re-use articles critical to your research without making sure that you have checked and understand the research to a sufficient degree that you can appropriately cite it? Because that’s peer review, that is, and it applies irrespective of whether an article has already been peer reviewed or not.
The Zika virus is an international public health emergency, as declared early on in February by the World Health Organisation. As such, it is critical that the global research community help combat this threat as rapidly and efficiently as possible. This is a case when science can quite literally save lives.
Recently, an article on the host-vector ratio of the Zika virus was published on the arXiv, a platform for articles often called ‘preprints’. This means that the work has not yet been peer reviewed, and it is also not possible to comment on it on the arXiv itself due to functional constraints. Meanwhile, the paper is stuck in the hidden, timeless limbo of journal peer review until it eventually emerges as a published article or is ultimately rejected.
Traditional models of peer review occur pre-publication by selected referees and are mediated by an Editor or Editorial Board. This model has been adopted by the vast majority of journals, and acts as the filter system to decide what is considered to be worthy of publication. In this traditional pre-publication model, the majority of reviews are discarded as soon as research articles become published, and all of the insight, context, and evaluation they contain are lost from the scientific record.
Several publishers and journals are now taking a more adventurous exploration of peer review that occurs subsequent to publication. The principle here is that all research deserves the opportunity to be published, and the filtering through peer review occurs subsequent to the actual communication of research articles. Numerous venues now provide inbuilt systems for post-publication peer review, including ScienceOpen, RIO, The Winnower, and F1000 Research. In addition to those adopted by journals, there are other post-publication annotation and commenting services such as hypothes.is and PubPeer that are independent of any specific journal or publisher and operate across platforms.
One main aspect of open peer review is that referee reports are made publicly available after the peer review process. The theory underlying this is that peer review becomes a supportive and collaborative process, viewed more as an ongoing dialogue between groups of scientists to progressively assess the quality of research. Furthermore, it opens up the reviews themselves to analysis and inspection, which adds an additional layer of quality control into the review process.
This co-operative and interactive mode of peer review, whereby it is treated as a conversation rather than a selection system, has been shown to be highly beneficial to researchers and authors. A study in 2011 found that when an open review system was implemented, it led to increasing co-operation between referees and authors, as well as an increase in the accuracy of reviews and an overall decrease in errors throughout the review process. Ultimately, it is this process which decides whether research is suitable or ready for publication. A recent study has even shown that the transparency of the peer review process can be used to predict the quality of published research. As far as we are aware, there are almost no drawbacks, documented or otherwise, to making referee reports openly available. What we gain by publishing reviews is the time, effort, knowledge exchange, and context of an enormous amount of currently secretive and largely wasted dialogue, which could also save around 15 million hours per year of otherwise lost work by researchers.