Tl;dr – “Post-publication peer review” (PPPR) has gained a lot of traction in recent years. As with much of peer review’s confusing lexicon, however, this term is ambiguous. This ambiguity stems from confusion over what constitutes “publication” in the digital age. PPPR conflates two distinct phenomena, which we would do better to treat separately, namely “open pre-review manuscripts” and “open final-version commenting”.
What is “post-publication peer review”?
Peer review can have two senses, one specific and the other more general. “Peer Review” (henceforth PR) is a well-defined publishing practice for the quality assurance of research articles and other academic outputs. It is intimately tied to the publication process. It traditionally begins when an editor sends a manuscript to reviewers and ends when the editor accepts a manuscript for publication. But “peer review” (lower-case, henceforth “pr”) is just the critique and appraisal of ideas, theories, and findings by those with particular insight into a topic. Such feedback happens all the time. It happens before manuscripts are submitted: in colleagues’ initial reactions (positive or negative) to a new idea, feedback gained from conferences, lectures, seminars and late-night bull sessions, or private comments on late-stage first-draft manuscripts from trusted peers. And it continues after the article’s appearance in a journal, via a multitude of channels through which readers can give feedback, including comment sections on journal websites, dedicated channels for post-publication commentary, blogs and social media, and of course in future research that cites and comments back on the findings.
A major difference between pr and PR is therefore temporal: the former happens all the time (or at least from the initial formation of ideas, theories and hypotheses), while the latter is a well-defined process that occurs precisely in that time between manuscript submission and article publication.
Post-publication peer review tries to bridge this disconnect, to incorporate elements of pr back into PR in order to open research up to wider scrutiny and serendipity, by messing with this standard temporal order of classical peer review (submission, review, publication). But the term is used to signify two distinct ways in which this can be done, namely what I’ll here call “open pre-review manuscripts” (opening manuscripts to the public before or in synchrony with peer review) and “open final-version commenting” (ongoing evaluation of “final” version-of-record publications). The conflation is not helpful, since the two phenomena actually have quite distinct purposes.
PPPR Type 1: Open pre-review manuscripts
Subject-specific “preprint servers” like arXiv.org and bioRxiv.org, institutional repositories, catch-all repositories like Zenodo.org or Figshare.com, and some publisher-hosted repositories (like PeerJ Preprints) allow authors to short-cut the traditional publication process and make their manuscripts immediately available to everyone. This can be used as a complement to a more traditional publication process, with comments invited on preprints and then incorporated into redrafting as the manuscript goes through traditional peer review with a journal. Alternatively, services which overlay peer-review functionalities on repositories can produce functional publication platforms at reduced cost (Boldt, 2011; Perakakis et al., 2010). The mathematics journal Discrete Analysis, for example, is an overlay journal whose primary content is hosted on the arXiv (Gowers, 2015). The recently released Open Peer Review Module for repositories, developed by Open Scholar in association with OpenAIRE, is an open-source software plug-in which adds overlay peer-review functionalities to repositories using the DSpace software (Open Scholar, 2016). Another innovative model along these lines is that of ScienceOpen, which ingests article metadata from preprint servers and contextualizes it by adding altmetrics and other relational information, before offering authors peer review.
In other cases manuscripts are submitted to publishers in the usual way, but made immediately available online (usually following some rapid preliminary review or “sanity check”) before the start of the peer review process. This approach was pioneered with the 1997 launch of the online journal Electronic Transactions on Artificial Intelligence (ETAI), which used a two-stage review process: manuscripts were first made available online for interactive community discussion, before later being subject to standard anonymous peer review. The journal stopped publishing in 2002 (Sandewall, 2012). Atmospheric Chemistry and Physics uses a similar system of multi-stage peer review, with manuscripts made immediately available as “discussion papers” for community comments and peer review (Pöschl, 2012). Other prominent examples are F1000Research and the Semantic Web Journal.
The benefit to be gained from open pre-review manuscripts is that researchers can assert their priority in reporting findings – they needn’t wait out the sometimes seemingly endless peer review and publishing process, during which they live in constant fear of being scooped. Moreover, getting research out earlier increases its visibility, enables open participation in peer review (where commentary is open to all), and perhaps even, according to Pöschl (2012), increases the quality of initial manuscript submissions.
PPPR Type 2: Open final-version commenting
If the purpose of peer review is to assist in the selection and improvement of manuscripts for publication, then it seems illogical to suggest that peer review can continue once the final version-of-record is made public. Nonetheless, in a literal sense, even the declared fixed version-of-record continues to undergo a process of improvement (occasionally) and selection (perpetually).
As with most areas of communication, the Internet has hugely expanded the range of channels through which readers can offer feedback on scholarly works. Where before only formal routes like letters to the journal or commentary articles offered readers a voice, a multitude of channels now exist. Journals are increasingly offering their own commentary sections: Walker and Rocha da Silva (2015) found that of 53 publishing venues reviewed, 24 provided facilities for user comments on published articles – although these were typically not heavily used. But users can “publish” their thoughts anywhere on the Web – via academic social networks like Mendeley, ResearchGate and Academia.edu, via Twitter, or on their own blogs. The reputation of a work hence undergoes continuous evolution for as long as it remains the subject of discussion.
Improvements based on feedback happen most obviously in the case of so-called ‘living’ publications like the Living Reviews group of three disciplinary journals in the fields of relativity, solar physics and computational astrophysics, which publish invited review articles that authors regularly update to incorporate the latest developments in the field. But even where the published version is anticipated to be the final version, it remains open to future retraction or correction. These days such changes are often fuelled by social media, as in the 2010 case of #arseniclife, where social-media critique of flaws in the methodology of a paper claiming to show a bacterium capable of growing on arsenic resulted in refutations being published in Science. The Retraction Watch blog is dedicated to publicising such cases.
A major influence here has been the independent platform PubPeer, which is proud to offer itself as a “post-publication peer review platform”. When its users swarmed to critique a Nature paper on STAP (Stimulus-Triggered Acquisition of Pluripotency) cells, PubPeer argued that its “post-publication peer review easily outperformed even the most careful reviewing in the best journal. The papers’ comment threads on PubPeer have attracted some 40000 viewers. It’s hardly suprising [sic] they caught issues that three overworked referees and a couple of editors did not. Science is now able to self-correct instantly. Post-publication peer review is here to stay” (PubPeer, 2014).
We need better words …
Open pre-review manuscripts and open final-version commenting are distinct phenomena. Open pre-review manuscripts help establish priority and get findings out early. Open final-version commenting, on the other hand, helps filter published articles via peer-to-peer recommendation and acts as a final watchdog to ensure that false findings can be corrected.
That the two get bundled together under the term PPPR is probably an artefact of confusion about the meaning of the legacy term “publishing”. When is a digital manuscript “published”? Is it just when it is first made “public”, including in that still malleable form that we call “pre-prints”? Or is “publication” that moment when a (presumed) “final” version of record appears, after peer-review, copy-editing and the rubber-stamp of the journal brand have been applied?
As Cameron Neylon has said in this context, #weneedbetterwordsforthis. That publishing is a process and not an event is, I think, well understood. But perhaps it is now time for more serious recognition, also in the words we use, that “publications” are also processes. On this latter view, the term “post-publication peer review” is a misnomer not just because it is ambiguous in conflating two distinct phenomena, but rather because it really makes no sense to talk about either post- or pre-publication in such fixed terms. What we call “publications” are not fixed artefacts, but are always in a continual state of becoming, as initial ideas bloom and research takes shape, as arguments are sharpened and honed, and research outputs find their place in the scientific conversation of mankind. The legacy language of the analogue print world seems insufficient in this regard. Which words would be better? That’s another conversation (and this post is already long enough!), but as a start I’d argue in favour of ones which talk of growth, maturity and stability …
Boldt, A., 2011. Extending ArXiv.org to achieve open peer review and publishing. J. Sch. Publ. 42, 238–242.
Gowers, T., 2015. Discrete Analysis — an arXiv overlay journal. Gowers’s Weblog.
Open Scholar, 2016. Open access repositories start to offer overlay peer review services [WWW Document]. Open Sch. CIC. URL http://www.openscholar.org.uk/institutional-repositories-start-to-offer-peer-review-services/ (accessed 8.25.16).
Perakakis, P., Taylor, M., Mazza, M., Trachana, V., 2010. Natural selection of academic papers. Scientometrics 85, 553–559. doi:10.1007/s11192-010-0253-1
Pöschl, U., 2012. Multi-stage open peer review: scientific evaluation integrating the strengths of traditional peer review with the virtues of transparency and self-regulation. Front. Comput. Neurosci. 6, 33. doi:10.3389/fncom.2012.00033
PubPeer, 2014. Science self-corrects – instantly. PubPeer Online J. Club.
Sandewall, E., 2012. Maintaining Live Discussion in Two-Stage Open Peer Review. Front. Comput. Neurosci. 6. doi:10.3389/fncom.2012.00009
Walker, R., Rocha da Silva, P., 2015. Emerging trends in peer review – a survey. Front. Neurosci. 9. doi:10.3389/fnins.2015.00169