There are many amazing blogs and bloggers out there that provide critical comments, context, and feedback on the ‘formally published’ research literature. One problem with these, though, is that they are often divorced from the papers themselves, perhaps lost on obscure websites, or not reaching the right target audience. This seems like an awful waste, don’t you think?
While some great initiatives such as The Winnower will now publish blog posts openly, these are still not connected to the papers they are based on, if they are indeed written about particular papers. But what do researchers think about blogging as a form of scholarly communication, in the form of post-publication peer review?
So, as with most of my ponderings, I took to Twitter to get some feedback with a little poll. I actually framed the question a little ambiguously, but this shouldn’t have skewed the responses in any particular direction (I hope).
Do you consider blogging to be a form of post-publication peer review?
What is interesting to me is that 41% of respondents, who undoubtedly did not constitute just a researcher sample, do not consider blogging to ‘count’ as peer review. I would really love to know why this is the case for some people. Perhaps they haven’t seen good examples, or perhaps it’s because blogging isn’t formalised in any way and is quite disassociated from the research literature.
But did you know that anyone can review any article they want on ScienceOpen, and not just those from ScienceOpen Research? And perhaps more importantly, anyone can invite anyone else to review any article? That sounds an awful lot like the day job for Editors at traditional journals, but with the power firmly in the hands of researchers and their communities. How cool is that?
It’s super easy to implement too. All you have to do is go to an article of choice, click the ‘Reviews’ button (Step 1), and then select the ‘Invite to Review’ button (Step 2). If you were feeling inclined, you could review the paper yourself too!
You can then simply select their ScienceOpen username (what, you don’t have one yet?!), or invite them by email (Step 3).
I remember my first peer review. An Editor for a well-respected Elsevier journal in Earth Sciences emailed me during the second year of my PhD, asking me to peer review a paper for them. I hadn’t published anything by that point of my PhD, and had received no formal training in how to peer review papers. I initially declined, but was pretty much coerced into doing it, despite my reservations. “It’ll be great training and experience”, I was told. Go on. Go on go on go on go on go on. In the end, I did the review, but got my supervisor to check it over to make sure I was fair, thorough, and constructive. I remember him saying “This is surprisingly good!”, and thinking ‘Thanks…’. But his response was more because it was my first peer review, done without any training, rather than anything to do with my ability as a scientist. And rightly so – why should I have been expected to do a good job of peer review at such an early stage in my career, and with no formal training?
I wonder then how many other PhD students are told the same, and thrown into the deep end. ‘Peer review for this journal and receive fame and glory. It doesn’t matter how well you do it, as long as you do it.’
I get the feeling that some researchers regard public, post-publication peer review as a non-rigorous, non-structured and poor alternative to traditional peer review. Much of this might be down to the view that there are no standards, and no control in a world of ‘open’.
This couldn’t be further from the truth.
At venues like ScienceOpen and F1000 Research, there is full Editorial control over peer review. The only difference is that there is an additional safeguard against fraud and abuse. In public peer review, the quality (and quantity) of the process is made explicit. Both the report and the identity of the reporter are made open. This type of system invites civility and community engagement, and lays the foundation for crediting referees. It also highlights an under-appreciated and overlooked aspect of the work that scientists do to advance knowledge in the real world.
ScienceOpen Editor Dan Cook said, “Personally, I think the public needs to know how hard scientists work to advance our understanding of the world.”
At ScienceOpen, the Editorial office plays two roles. First, the Editorial team for ScienceOpen Research performs all the basic standards checks to make sure that published research meets an appropriate scientific standard. They attempt to protect against pseudoscience, and ensure that the manuscript is prepared to undergo public scrutiny. Second, there are Collection Editors, who manage peer review, curation, and discussion for their own Collections.
Why is Editorial control so important?
For starters, without an Editor, peer review will never get done. Researchers are busy, easily distracted, and working on 1000 other things at once. Opting to go out into the world and randomly distribute your knowledge through peer review, while selfless, is actually quite a rare phenomenon.
Peer review needs structure, coordination, and control. In the same way as traditional peer review, this can be facilitated by an Editor.
But why should this imply a closed system? In a closed system, who is peer reviewing the Editors? What are editorial decisions based on? Whom are Editors selecting as reviewers, and why?
These are all questions that are obscured by traditional peer review, and traits of a closed, secretive, and subjective system – not the rigorous, objective, gold standard that we hold peer review to be.
At ScienceOpen, we recognise this dual need for Editorial standards combined with transparency. Transparency leads to accountability, which in turn lends itself to a less biased, more rigorous and civil process of peer review.
How does Editorial coordination work with Collections?
Collections are the perfect place to demonstrate and exercise editorial management. Collection Editors, of which there can be up to five per Collection, have the authority to manage the process of peer review, but out in the open.
They can do this either by inviting external colleagues into the system to review papers, or, if those colleagues already have a profile with us, by simply inviting them to review specific papers; referees will then receive an invitation to peer review.
Quality control is facilitated through ORCID, as referees must have 5 items associated with their account in order to formally peer review. And to comment, all you need is an ORCID account, simples!
The major difference between a traditional Editor and a Collection Editor is selection. As a traditional Editor, you wield supreme power over what ultimately becomes published in the journal by deciding what gets rejected and what gets sent out to peer review. As a Collection Editor, you don’t reject anything – you filter from pre-existing content depending on your scope.
The arXiv is a server that hosts ‘eprints’ or ‘preprints’ of research papers, and is a key publishing platform for many fields, particularly physics and mathematics. Founded back in 1991 by Paul Ginsparg, it currently hosts over 1 million research articles, with more than 8000 submissions per month!
Despite now having been running for 25 years, the arXiv still represents one of the greatest technological innovations in using the Web for scholarly communication.
While the majority of the content submitted to the arXiv is subsequently also submitted to traditional journals for publication, there is still content which never goes beyond its confines. Irrespective of this, communities engaged with the arXiv still cite articles published there, whether or not they have been formally published in a journal elsewhere.
This is the whole purpose of the arXiv: to facilitate rapid peer-to-peer communication so that science accelerates faster. The fact that all articles are publicly available is incidental, and just happens to be a topic of major interest with the growing open access movement.
However, the arXiv is not peer reviewed in the formal sense. It is moderated, so that junk submissions can be removed, or manuscripts recategorised, but it lacks the additional layer of quality control of traditional peer review.
So while some might think this poses a risk, ask yourself this question: do you re-use articles critical to your research without making sure that you have checked and understood the research to a sufficient degree that you can appropriately cite it? Because that is exactly what peer review is, and it applies irrespective of whether an article has already been peer reviewed or not.
Doing peer review is tough. Building a Collection is tough. Both are also time consuming, and academics are like the White Rabbit from Alice in Wonderland: never enough time!
So while the benefits of open peer review and building Collections need to be considered in the ‘temporal trade-off’ world of research, what are some other things researchers can do to help advance open science with us?
Here’s a simple list of 10 things that take anything from a few seconds to a few minutes!
Rate an article. You don’t have to do a full peer review, but can simply provide a rating. Come back later and provide a full review!
Recommend an article. Click, done. Interested researchers can see which articles are more highly recommended by the community.
Share an article. Use social media? Share on Facebook, Twitter, Google+, email, or further on ScienceOpen.
Comment on an article. Members with one item in their ORCID accounts can comment on any article.
Follow a Collection. See a Collection you like (like this one)? Click ‘Follow’, done.
Comment on a Collection. Like with all our articles, all Collection articles can be commented on, shared, recommended and peer reviewed.
Become a ScienceOpen member. It’s not needed for many of the functions on our platform, but does mean you can engage with the existing community and content more. Register here!
Have you replicated someone’s results? Let them know that in a comment!
Think someone’s methods are really great? Let them know in a comment!
Did someone not cite your work when they should have? Let them know in a comment!
All articles can be commented on. All you need is a membership, and an ORCID account with just one item. Easy! Comments can be as short and sweet, or as long, as you like. But sometimes a comment can be worth a lot to researchers and communities, just in terms of offering new thoughts, perspectives, or validation. Comments are also a great way for junior researchers to engage with existing research communities.
The Zika virus is an international public health emergency, as declared early on in February by the World Health Organisation. As such, it is critical that the global research community help combat this threat as rapidly and efficiently as possible. This is a case when science can quite literally save lives.
Recently, an article on the host-vector ratio in the Zika virus was published on the arXiv, a platform for articles often called ‘preprints’. This means that the work has not yet been peer reviewed, and it is also not possible to comment on the arXiv itself due to functional constraints. The paper is stuck in the hidden, timeless limbo of peer review until its eventual emergence as a published paper or its ultimate rejection.
Traditional models of peer review occur pre-publication by selected referees and are mediated by an Editor or Editorial Board. This model has been adopted by the vast majority of journals, and acts as the filter system to decide what is considered to be worthy of publication. In this traditional pre-publication model, the majority of reviews are discarded as soon as research articles become published, and all of the insight, context, and evaluation they contain are lost from the scientific record.
Several publishers and journals are now taking a more adventurous exploration of peer review that occurs subsequent to publication. The principle here is that all research deserves the opportunity to be published, and the filtering through peer review occurs subsequent to the actual communication of research articles. Numerous venues now provide inbuilt systems for post-publication peer review, including ScienceOpen, RIO, The Winnower, and F1000 Research. In addition to those adopted by journals, there are other post-publication annotation and commenting services such as hypothes.is and PubPeer that are independent of any specific journal or publisher and operate across platforms.
One main aspect of open peer review is that referee reports are made publicly available after the peer review process. The theory underlying this is that peer review becomes a supportive and collaborative process, viewed more as an ongoing dialogue between groups of scientists to progressively assess the quality of research. Furthermore, it opens up the reviews themselves to analysis and inspection, which adds an additional layer of quality control into the review process.
This co-operative and interactive mode of peer review, whereby it is treated as a conversation rather than a selection system, has been shown to be highly beneficial to researchers and authors. A study in 2011 found that when an open review system was implemented, it led to increased co-operation between referees and authors, as well as an increase in the accuracy of reviews and an overall decrease in errors throughout the review process. Ultimately, it is this process which decides whether research is suitable or ready for publication. A recent study has even shown that the transparency of the peer review process can be used to predict the quality of published research. As far as we are aware, there are almost no drawbacks, documented or otherwise, to making referee reports openly available. What we gain by publishing reviews is the time, effort, knowledge exchange, and context of an enormous amount of currently secretive and largely wasted dialogue, which could also save around 15 million hours per year of otherwise lost work by researchers.
Open peer review has many different aspects, and is not simply about removing anonymity from the process. Open peer review forms part of the ongoing evolution of an open research system, and the transformation of peer review into a more constructive and collaborative process. The ultimate goal of traditional peer review remains the same – to make sure that the work of authors gets published to an acceptable standard of scientific rigour.
There are different levels of bi-directional anonymity throughout the peer review process, including whether the referees know who the authors are but not vice versa (single-blind review), or whether both parties remain anonymous to each other (double-blind review). Open peer review is a relatively new phenomenon (initiated in 1999 by the BMJ), one aspect of which is that the authors’ and referees’ names are disclosed to each other. The foundation of open peer review is transparency, to avoid the competition or conflicts arising from the fact that those performing peer review will often be the closest competitors to the authors, as they will tend to be the most competent to assess the research.