Recently, our colleagues at OpenAIRE published a systematic review of ‘Open Peer Review’ (OPR). As part of this, they defined seven consistent traits of OPR, which struck us as a good opportunity to clarify how peer review works at ScienceOpen.
At ScienceOpen, we have over 31 million article records all available for public, post-publication peer review (PPPR), more than 3 million of which are full-text Open Access. This functionality is a response to increasing calls for continuous moderation of the published research literature, a consistent questioning of the functionality of the traditional peer review model (some examples in this post), and an increasing recognition that scientific discourse does not stop at the ‘event’ point of publication for any research article.
At ScienceOpen, we invite the whole scientific community to contribute to the review process, should they wish to. The only requirements for writing a review are that the person be registered with ORCID and have at least five publications assigned to their ORCID account (Scientific Members and Experts). If you do not meet these requirements and wish to perform a peer review at ScienceOpen, please contact us and we will make an exception for you.
Users with at least one publication assigned to their ORCID account are able to comment on a paper (Members). Please refer to our User categories for further details.
At ScienceOpen, we have been pushing for greater transparency in peer review since our inception. We inject transparency at multiple levels, by identifying referees, publishing reports, providing formal recognition for contributions, and encouraging open interaction on our platform (more details here).
This is why we’re extremely stoked that this year’s Peer Review Week is built around the theme of transparency!
In 2017, we are helping to organise a session at the Peer Review Congress to help showcase what peer review should look like when it works. We look forward to working with the other partner organisations and the global scholarly community in helping to make peer review a fairer, more transparent process.
A core concept for our evolving understanding of open research and scholarship is that of equity and fairness within the global research community. At ScienceOpen, this is something we strongly believe in, and work together with a range of publishers and researchers to play our part in making this a reality for research.
As part of our mission, we therefore try to break down barriers in research, and prefer to build bridges rather than walls. Here are just some examples of how we do this, and in doing so contribute to building a platform that acts as a social community space for all researchers.
One of the main features of ScienceOpen is that we are a research aggregator. We don’t select what we index based on discipline, publisher, or geography, as that just creates another silo. Enough of those exist already. What we need, and what we do, is to bring together research articles from across publishers and other platforms and into one space, where it is all treated in exactly the same way.
When articles are displayed in this way, factors such as journal brands and impact factors carry less weight than the actual content itself. Users can make their own choices about what to read, review, share, and re-use based on their own expertise and evaluation, or on the social context provided by our other users.
Last year, SciELO integrated more than 500,000 Open Access articles with us from across Latin America, for the first time putting all of this research on the same level as research contained within PubMed Central. There is no reason why there should be geographical segregation of research across platforms. We believe that all research deserves to be read and re-used by anyone, irrespective of where that research was conducted and who published it.
Open Access isn’t just about access to knowledge, but also principles of equality, and to achieve that we have to recognize the value of research from around the world.
How can something exclusive, secretive, and irreproducible be considered to be objective? How can something exclusive, secretive, and irreproducible be considered as a ‘gold standard’ of any sort?
Traditional, closed peer review has all of these traits, yet for some reason it is held in esteem as the most rigorous and objective standard of research and knowledge generation that we have. Peer review fails peer review, its own test of integrity and validation, and this is one of the greatest ironies of the academic world.
What we need is a new standard of peer review that is suitable for a Web-based world of scholarly communication. This is to help accommodate the increasingly rapid communication of research and new sources of information, and to bring peer review out of the dark (literally) ages and into a form that makes sense in a world of fast, open, digital knowledge dissemination.
What should a standard for peer review look like in 2017?
The big test for peer review, and any future version of it, is how the scientific community applies its stamp of approval.
At ScienceOpen, we have over 28 million article records all available for public, post-publication peer review (PPPR), 3 million of which are full-text Open Access. This functionality is a response to increasing calls for continuous moderation of the published research literature, a consistent questioning of the functionality of the traditional peer review model (some examples in this post), and an increasing recognition that scientific discourse does not stop at the point of publication for any research article.
In spite of this increasing demand, the uptake of PPPR across different platforms seems to be relatively low overall. So what are some of the main reasons why researchers might feel less motivated to do PPPR, and is there anything we can do to increase its usage and adoption as part of a more open research culture?
What even is ‘post-publication’ peer review?
There is a general mentality among researchers that once research has been published, it has already ‘passed’ peer review, so why should it need to be peer reviewed again?
The Pitt and Hill (2016) article has been read and downloaded almost 100 times a day since its publication on ScienceOpen. More importantly, it now has 7 independent post-publication peer reviews and 5 comments. Although this is a single paper in ScienceOpen’s vast index of 28 million research articles (all open to post-publication peer review!), the story of how this article got so much attention is worth re-telling.
Prof. Stark runs a course on the theory and application of statistical models. In his course, groups of students replicate and critique the statistical analyses of published research articles using the articles’ publicly available raw data. Obviously, for this course to work, Prof. Stark needs rigorous research articles and the raw data used in each article. In this sense, Pitt and Hill’s article on ScienceOpen was an ideal candidate.
The groups of students started their critical replication of the Pitt and Hill article in the Fall semester of 2016 and finished right before the new year. By actively engaging with published research in this way, students gain the confidence and expertise to critically analyse it.
The Post-Publication Peer Review function on ScienceOpen is usually only open to researchers with at least five published articles. This would normally have barred Stark’s groups from publishing their critical replications. However, upon hearing about his amazing initiative, ScienceOpen opened its review function to each of Prof. Stark’s vetted early-career researchers. And importantly, since each peer review on ScienceOpen is assigned a CrossRef DOI along with a CC BY license, after posting their reviews, each member of the group had officially shared their very own scientific publication.
All of the complete peer reviews from the groups of students can be found below. They all come with highly detailed statistical analyses of the research, and are thorough, constructive, and critical, as we expect an open peer review process to be.
Furthermore, unlike almost every other post-publication peer review function out there, peer reviews on ScienceOpen can include embedded graphics and plots. This awesome feature was added specifically for Prof. Stark’s course, but note that it is now available for any peer review on ScienceOpen.
Kick off the new year with the new unified search on ScienceOpen! We have accomplished a lot over the last year and are looking forward to supporting the academic community in 2017.
In 2016 ScienceOpen brought you more context: Now your search comes with a new analytics bar that breaks down your search results by collections, journals, publishers, disciplines, and keywords for quicker filtering. Try a search for the pressing topics of 2016 like Zika or CRISPR and take the new features for a spin.
Researcher output, journal content, reference lists, and citing articles can all be dynamically sorted and explored by Altmetric score, citations, date, or activity. Statistics for journals, publishers, and authors give an overview of the content that we are indexing on ScienceOpen. Check out the most relevant journals on ScienceOpen, for example BMC Infectious Diseases or PLoS Genetics, for a new perspective. Or add your publications to your ORCID record and get a dynamic view of your own output.
In 2016 ScienceOpen brought you more open: The ScienceOpen team participated in and helped organize numerous community events promoting Open Science. From Peer Review Week to OpenCon, talks at SSP in Vancouver and SpotOn in London, our team was on the road, debating hot issues in scholarly communication.
In order to bring more visibility to smaller community open access journals, which often run on a voluntary basis with little to no funding, we launched our platinum indexing competition, geared towards open access journals that charge no APCs to their authors. Four successful rounds in, we have selected 18 journals to be indexed and awarded some of them special featured collections on the ScienceOpen platform. This activity was particularly rewarding, as journal editors wrote back expressing their enthusiasm about the ScienceOpen project and reporting increased usage of their content.
The ScienceOpen 2.017 version will continue to focus on context, content, and open science. We are your starting point for academic discovery and networking. Together, let’s explore new ways to support visibility for your publications, promote peer review, improve search and discovery, and facilitate collection building. Here’s to putting research in context! The year 2016 had some great moments – may 2017 bring many, many more!
For Peer Review Week 2016, we set a simple competition for you all: publicly peer review one of the 25 million research articles on our platform. This fitted perfectly with this year’s theme of ‘Recognising Review’, as every single peer review conducted with us is published openly and made creditable through the application of a CC BY license, which enables unrestricted sharing and re-use of the reviews provided that attribution is given.
We’re happy to announce that Lauren Collister was the winner this year, and a t-shirt is on its way!
To honor and celebrate peer review, a group of organizations is working collaboratively to plan a week of activities and events. The group is delighted to announce that the second annual Peer Review Week will run from September 19 to 25, 2016.
This year’s theme is Recognition for Review, exploring all aspects of how those participating in review activity – in publishing, grant review, conference submissions, promotion and tenure, and more – should be recognized for their contribution.
NOTE: OpenAIRE would like to know what you think about open peer review! Have your say here until 7th October!
Tl;dr – “Post-publication peer review” (PPPR) has gained a lot of traction in recent years. As with much of peer review’s confusing lexicon, however, this term is ambiguous. This ambiguity stems from confusion over what constitutes “publication” in the digital age. PPPR conflates two distinct phenomena, which we would do better to treat separately, namely “open pre-review manuscripts” and “open final-version commenting”.
What is “post-publication peer review”?
Peer review can have two senses, one specific and the other more general. “Peer Review” (henceforth PR) is a well-defined publishing practice for the quality assurance of research articles and other academic outputs. It is intimately tied to the publication process. It traditionally begins when an editor sends a manuscript to reviewers and ends when the editor accepts a manuscript for publication. But “peer review” (lower-case, henceforth “pr”) is just the critique and appraisal of ideas, theories, and findings by those with particular insight into a topic. Such feedback happens all the time. It happens before manuscripts are submitted: in colleagues’ initial reactions (positive or negative) to a new idea, feedback gained from conferences, lectures, seminars and late-night bull sessions, or private comments on late-stage first-draft manuscripts from trusted peers. And it continues after the article’s appearance in a journal, via a multitude of channels through which readers can give feedback, including comment sections on journal websites, dedicated channels for post-publication commentary, blogs and social media, and of course in future research that cites and comments back on the findings.