Peer review at ScienceOpen is a little different to what you might be used to.
Does the fact that a paper has been published, and therefore peer reviewed, mean that it is flawless? Does it mean that the conversation around that research should stop? We do not think so. The only reason there would ever be no value in post-publication evaluation would be if all published work were completely infallible, which is clearly not the case. This is, after all, why we continue to do research and build upon the work of those before us!
Therefore, we enable post-publication peer review across 34 million article records, as a form of final-version commenting. It can also be performed on preprints from the arXiv. These are essentially treated as open, pre-review manuscripts. Users can organise these into collections, and manage peer review entirely themselves as a community process.
We have now added a new feature that enables any of our users to invite another researcher to perform peer review on our platform, in the same way that an Editor does for a journal, as part of a fully transparent process – the theme for Peer Review Week this year! The difference from the traditional process of peer review is that this is more democratic, as it is open to anyone.
All article pages now have an ‘Invite to Review’ button. Click it, and you have two options:

1. Search within the ScienceOpen userbase to see if the person you want to review already has a profile with us.
2. Add an email, or list of emails, for the people you want to invite to review, if they don’t already have a ScienceOpen profile.
That’s it. It’s that easy. This combines the editorial management of peer review with open participation. We enable this to make sure that the process is fair, but efficient. This means that anyone within your research community can contribute to the research process, should they wish to.
‘Open research’ isn’t just about sharing resources like data, code, and papers, although this is a big part of it. One big and often under-appreciated aspect of it is making research accessible, inclusive, and participatory. A major principle driving this is leveraging transparency to bring processes and factors that are currently hidden into public view.
One area of research and scholarly communication where this debate is still very much ongoing is peer review – our system of validation and gatekeeping for the vast archives of public knowledge.
OpenAIRE have released an important new survey and analysis on attitudes towards, and experiences of, ‘Open Peer Review’ (OPR), based on more than 3000 respondents (full data available here to play with). This is important, as OPR is all about the principles above – making the process transparent, collaborative, inclusive, and in the end, better!
Below, we discuss some of the major findings of the survey, and how we at ScienceOpen fit into the bigger picture of Open Peer Review.
The future is Open
The main result of the survey is that the majority (60.3%) of respondents are in favour of OPR becoming a mainstream scholarly practice, particularly regarding open interaction, open reports, and final-version commenting. Part of this is due to the relatively low satisfaction scores reported: just 56.4% of respondents were satisfied with traditional closed peer review, and 20.6% were dissatisfied – a much narrower gap than in any previous report. More than three quarters of respondents had previously engaged with OPR as an author, reviewer, or editor. This suggests that OPR, in one form or another, is probably already more common practice than we might think.
Interestingly, this development is similar to what we saw with other aspects of ‘open science’ such as open access and open data – there is debate, experimentation, variable implementation, and finally they start to become accepted as the norm as policies, practices, and cultures adapt. The survey also showed that 88.2% of respondents were in favour of Open Access to publications, a much higher value than several years ago. It also found that support for OPR is correlated with support for Open Data and Open Access, which is perhaps not surprising, although conversations regarding OPR are still in their relative infancy.
This suggests that as debates around OPR mature, we are likely to see an increase in its uptake and support, as with other areas of ‘Open’. Indeed, the survey also found a generational difference in support for OPR, with younger researchers favouring it more than their more-established colleagues. As it is these younger generations who will inherit and govern the system in the future, it is more likely to take on the characteristics that they favour.
Recently, our colleagues at OpenAIRE have published a systematic review of ‘Open Peer Review’ (OPR). As part of this, they defined seven consistent traits of OPR, which we thought sounded like a remarkably good opportunity to help clarify how peer review works at ScienceOpen.
At ScienceOpen, we have over 31 million article records all available for public, post-publication peer review (PPPR), more than 3 million of which are full-text Open Access. This functionality is a response to increasing calls for continuous moderation of the published research literature, a consistent questioning of the functionality of the traditional peer review model (some examples in this post), and an increasing recognition that scientific discourse does not stop at the ‘event’ point of publication for any research article.
At ScienceOpen, we invite the whole scientific community to contribute to the review process, should they wish to. The only requirement is that the person has to be registered with ORCID and have at least five publications assigned to their ORCID account to write a review (Scientific Members and Experts). If you do not satisfy these requirements and wish to perform a peer review at ScienceOpen, please contact us and we will make an exception for you.
Users with at least one publication assigned to their ORCID account are able to comment on a paper (Members). Please refer to our User categories for further details.
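The eligibility rules above can be summarised in a short sketch. This is purely illustrative (not ScienceOpen's actual code), with a hypothetical `User` record standing in for a real user profile; the thresholds are the ones stated above.

```python
# Illustrative sketch of the ScienceOpen eligibility rules described above.
# The User record and function names are hypothetical, invented for this example;
# only the ORCID requirement and the publication-count thresholds come from the text.

from dataclasses import dataclass

@dataclass
class User:
    has_orcid: bool       # registered with ORCID
    publications: int     # works assigned to the user's ORCID account

def can_review(user: User) -> bool:
    """Scientific Members and Experts: ORCID plus at least five publications."""
    return user.has_orcid and user.publications >= 5

def can_comment(user: User) -> bool:
    """Members: ORCID plus at least one publication."""
    return user.has_orcid and user.publications >= 1

print(can_review(User(has_orcid=True, publications=5)))   # True
print(can_comment(User(has_orcid=True, publications=1)))  # True
print(can_review(User(has_orcid=True, publications=3)))   # False
```

Note that, as the text says, researchers who fall below the review threshold can still contact the editorial team, who may grant an exception.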
The Pitt and Hill (2016) article was read and downloaded almost 100 times a day since its publication on ScienceOpen. More importantly, it now has 7 independent post-publication peer reviews and 5 comments. Although this is a single paper in ScienceOpen’s vast index of 28 million research articles (all open to post-publication peer review!), the story of how this article got so much attention is worth re-telling.
Prof. Stark runs a course on the theory and application of statistical models. In his course, groups of students replicate and critique the statistical analyses of published research articles using the articles’ publicly available raw data. Obviously, for this course to work, Prof. Stark needs rigorous research articles and the raw data used in them. In this sense, Pitt and Hill’s article on ScienceOpen was an ideal candidate.
The groups of students started their critical replication of the Pitt and Hill article in the Fall semester of 2016 and finished right before the new year. By actively engaging with published research in this way, students gain the confidence and expertise to critically analyse it.
The Post-Publication Peer Review function on ScienceOpen is usually only open to researchers with more than 5 published articles. This would have normally barred Stark’s groups from publishing their critical replications. However, upon hearing about his amazing initiative, ScienceOpen opened their review function to each of Prof. Stark’s vetted early career researchers. And importantly, since each peer review on ScienceOpen is assigned a CrossRef DOI along with a CC-BY license, after posting their reviews, each member of the group has officially shared their very own scientific publication.
All of the complete peer reviews from the groups of students can be found below. They all come with highly detailed statistical analyses of the research, and are thorough, constructive, and critical, as we expect an open peer review process to be.
Furthermore, unlike almost every other Post Publication Peer Review function out there, the peer reviews on ScienceOpen are integrated with graphics and plots. This awesome feature was added specifically for Prof. Stark’s course, but note that it is now available for any peer review on ScienceOpen.
I remember my first peer review. An Editor for a well-respected Elsevier journal in Earth Sciences emailed me during the second year of my PhD, asking me to peer review a paper for them. I hadn’t published anything by this point of my PhD, and had received no formal training in how to peer review papers. I initially declined, but was pretty much coerced into doing it, despite my reservations. “It’ll be great training and experience”, I was told. Go on. Go on go on go on go on go on. In the end, I did the review, but got my supervisor to check it over to make sure I was fair, thorough, and constructive. I remember him saying “This is surprisingly good!”, and thinking ‘Thanks…’. But his surprise was more because it was my first peer review, done without any training, rather than anything to do with my ability as a scientist. And rightly so – why should I have been expected to do a good job of peer review at such an early stage in my career, and with no formal training?
I wonder then how many other PhD students are told the same, and thrown into the deep end. ‘Peer review for this journal and receive fame and glory. It doesn’t matter how well you do it, as long as you do it.’
I get the feeling that some researchers regard public, post-publication peer review as a non-rigorous, non-structured and poor alternative to traditional peer review. Much of this might be down to the view that there are no standards, and no control in a world of ‘open’.
This couldn’t be further from the truth.
At venues like ScienceOpen and F1000 Research, there is full Editorial control over peer review. The only difference is that there is an additional safeguard against fraud and abuse: in public peer review, the quality (and quantity) of the process is made explicit. Both the report and the identity of the reviewer are made open. This type of system invites civility and community engagement, and lays the foundation for crediting referees. It also highlights an under-appreciated and often overlooked aspect of the work that scientists do to advance knowledge in the real world.
ScienceOpen Editor Dan Cook said, “Personally, I think the public needs to know how hard scientists work to advance our understanding of the world.”
At ScienceOpen, the Editorial office plays two roles. First, the Editorial team for ScienceOpen Research performs all the basic checks to make sure that published research meets an appropriate scientific standard. They guard against pseudoscience, and ensure that each manuscript is prepared to undergo public scrutiny. Second, there are Collection Editors, who manage peer review, curation, and discussion for their own Collections.
Why is Editorial control so important?
For starters, without an Editor, peer review will never get done. Researchers are busy, easily distracted, and working on 1000 other things at once. Opting to go out into the world and randomly distribute your knowledge through peer review, while selfless, is actually quite a rare phenomenon.
Peer review needs structure, coordination, and control. In the same way as traditional peer review, this can be facilitated by an Editor.
But why should this imply a closed system? In a closed system, who is peer reviewing the Editors? What are editorial decisions based on? Why and who are Editors selecting as reviewers?
These are all questions that are obscured by traditional peer review, and traits of a closed, secretive, and subjective system – not the rigorous, objective, gold standard that we hold peer review to be.
At ScienceOpen, we recognise this dual need for Editorial standards combined with transparency. Transparency leads to accountability, which in turn lends itself to a less biased, more rigorous and civil process of peer review.
How does Editorial coordination work with Collections?
Collections are the perfect place to demonstrate and exercise editorial management. Collection Editors, of which there can be up to five per Collection, have the authority to manage the process of peer review, but out in the open.
They can do this either by externally inviting colleagues to review papers, or, if those colleagues already have a profile with us, by simply selecting them as reviewers for specific papers. In either case, the referee receives an invitation to peer review.
Quality control is facilitated through ORCID, as referees must have 5 items associated with their account in order to formally peer review. And to comment, all you need is an ORCID account, simples!
The major difference between a traditional Editor and a Collection Editor is selection. As a traditional Editor, you wield supreme power over what ultimately becomes published in the journal by deciding what gets rejected and what gets sent out to peer review. As a Collection Editor, you don’t reject anything – you filter from pre-existing content depending on your scope.
One main aspect of open peer review is that referee reports are made publicly available after the peer review process. The theory underlying this is that peer review becomes a supportive and collaborative process, viewed more as an ongoing dialogue between groups of scientists who progressively assess the quality of research. Furthermore, it opens up the reviews themselves to analysis and inspection, which adds an additional layer of quality control to the review process.
This co-operative and interactive mode of peer review, whereby it is treated as a conversation rather than a selection system, has been shown to be highly beneficial to researchers and authors. A study in 2011 found that when an open review system was implemented, it led to increased co-operation between referees and authors, as well as an increase in the accuracy of reviews and an overall decrease in errors throughout the review process. Ultimately, it is this process which decides whether research is suitable or ready for publication. A recent study has even shown that the transparency of the peer review process can be used to predict the quality of published research. As far as we are aware, there are almost no drawbacks, documented or otherwise, to making referee reports openly available. What we gain by publishing reviews is the time, effort, knowledge exchange, and context of an enormous amount of currently secretive and largely wasted dialogue, which could also save around 15 million hours per year of otherwise lost work by researchers.
Open peer review has many different aspects, and is not simply about removing anonymity from the process. Open peer review forms part of the ongoing evolution of an open research system, and the transformation of peer review into a more constructive and collaborative process. The ultimate goal of traditional peer review remains the same – to make sure that the work of authors gets published to an acceptable standard of scientific rigour.
There are different levels of anonymity in the peer review process: the referees may know who the authors are but not vice versa (single-blind review), or both parties may remain anonymous to each other (double-blind review). Open peer review is a relatively new phenomenon (initiated in 1999 by the BMJ), one aspect of which is that the authors’ and referees’ names are disclosed to each other. The foundation of open peer review is transparency, to avoid the competition or conflicts that can arise because those performing peer review will often be the closest competitors of the authors, as they tend to be the most competent to assess the research.