
Category: Peer Review

Non-Anon Post-Pub Peer Review in action!

Image attribution: Stop and Go, Nana B Agyel, Flickr, CC BY

One of the trickiest parts about launching anything new, which was true for PLOS ONE back in the day (hard to believe now!), is that the best way to explain what you do is to show it in action. Since we only officially launched in May, we’ve been watching some interesting use-cases develop, by which we mean ScienceOpen articles with Non-Anonymous Post-Publication Peer Review (PPPR). Even though we publish articles with a DOI in about a week, it has taken a little longer for the reviewers to have their say (reviews also receive a DOI), but we’re finding that what they say is well worth reading.

These articles and their associated reviews reassure us that PPPR, which some still consider pretty radical, is a nascent but potentially healthy way to improve how we review research. They also begin to show that PPPR can benefit all sorts of research. If it can work for less spectacular, negative or contradictory research, then perhaps it will shine for once-in-a-lifetime findings (which are, of course, far rarer).

Example 1. Professor Hugo Ten Cate (et al.), a member of our Editorial Board from the Department of Internal Medicine at Maastricht University, The Netherlands, published an article with us entitled “The anti-coagulants ASIS or APC do not protect against renal ischemia/reperfusion injury”. It has received two PPPR reviews from relevant experts, one by Professor Nigel Mackman and the other by Professor Ton Lisman. What really helps to tell the story of this article, from the author’s perspective, is that Hugo has made a video in which he explains that the results of this paper were not spectacular, in fact they were mostly negative, but that doesn’t mean the article shouldn’t be published (other journals declined to do so), because it balances out other papers that report positive outcomes. Naturally, we agree with him!

Example 2. Assistant Professor Nitika Pant Pai (et al.), a member of our Editorial Board from the Department of Medicine at McGill University and a Scientist at the MUHC Research Institute, published an article with us entitled “Head to head comparisons in performance of CD4 point-of-care assays: a Bayesian meta-analysis (2000–2013)”. It has received a detailed review from Dr Paul Drain, a Medicine Resident at Stanford University. Again, the author made a video in which she enthusiastically explains her support for Open Access and the concept of PPPR.

Example 3. Daniel Graziotin, a PhD student in Computer Science at the Free University of Bozen-Bolzano, Italy, published an article with us entitled “Green open access in computer science – an exploratory study on author-based self-archiving awareness, practice, and inhibitors”, which examines self-archiving among authors in an Italian computer science faculty. It has received two reviews, the first from Professor Stephen Curry (on our Editorial Board) and the other from Dr Alexandros Koulouris. In this case, the author gave us an interview to explain the background to this initial piece of research.

Example 4. Professor Nikos Karamanos (et al), a member of our Editorial Board from the University of Patras in Greece, published an article entitled “EGF/EGFR signaling axis is a significant regulator of the proteasome expression and activity in colon cancer cells” with us. It has received two reviews, one from Prof Dr Liliana Schaefer and the other from Assistant Professor Satoshi Tanida. Again, the author gave us an interview in which he explains the background to his article and his feelings on OA.

What do these use-cases tell us? Mostly that it’s early days, so meaningful observations are perhaps premature! However, here are some thoughts:

  • The reviewers that are being invited to the scientific conversation are participating and broadening the debate
  • The reviews are respectfully delivered with a straightforward tone, even when critical (probably because they are Non-Anon)
  • It’s good to see papers from the medical community, arguably the quintessential OA use-case for researchers, patients, their families and friends
  • The reviewers are appropriately matched to the content: authors can suggest up to 10 reviewers, and anyone with 5 or more publications on their ORCID iD can review any content on the platform
  • The authors are largely, but not exclusively, from our Editorial Boards (no surprises here since they are usually the first to support a new publishing venture and are more senior so are freer to experiment)
  • Reading Non-Anon PPPR is a new skill that requires weighing a scholar’s background against their review and comparing/contrasting it with those of the other reviewers
  • None of these authors have yet used Versioning to revise their original articles in the light of reviewer feedback received (although this article is now on version 2)

Anyway, we hope you enjoy watching how PPPR at ScienceOpen evolves as much as we do! Feel free to leave a comment on this post to continue the conversation.

In:  Peer Review  

Where did our Peer Review Mojo go?

Peer 1: Brilliant! Accept with no changes; Peer 2: Groundbreaking! Accept with no changes; Peer 3: Reject. Credit: the brilliant @LegoAcademics

Many researchers agree that for all its faults, Peer Review is still the best mechanism available for the evaluation of research papers.

However, there are growing doubts that Pre-Publication Peer Review, single or double blinded, is the best way to get the job done. Fascinating background reading on this topic includes the Effect of Blinding and Unmasking on the Quality of Peer Review from the Journal of General Internal Medicine.

In a 2002 survey by the Association of Learned and Professional Society Publishers (ALPSP) [1], 45% of the respondents expected to see some change in Peer Review within the next five years – for example, journals moving to Open Post-Publication Peer Review. Although the timing of their prediction was off, there is now growing interest in this field and a handful of practitioners.

Driven by the fact that more and more scholarly publications are launched every year, the Peer Review process has been criticized for consuming too much valuable time. Moreover, Pre-Publication Peer Review and selection do not protect against fraud or misconduct. Other questions that have been raised about Peer Review include:

  • What does it do for Science?
  • What does the scientific community want it to do?
  • Does it illuminate good ideas or shut them down?
  • Should reviewers remain anonymous?

In this post, I want to explore in more detail what motivates researchers to evaluate the previously submitted work of their peers. If we can better understand the reasons why researchers review, we can also discuss scenarios which may improve both the transparency and quality of that process.

Let’s first consider what could boost a researcher’s motivation to review an article. At present there is a myriad of excuses that most of us use to put off this extra work, which usually claims several hours of an already tight time budget. The scientific community does not know or record how many hours scientists spend on Peer Review, and their institutions do not acknowledge this huge time commitment when assigning new funding. In a closed Peer Review system the effort goes uncredited: scientists receive no recognition for their work, as they would for a citable publication, and it carries no weight when applying for a new position. This is completely different from the rewards that flow from publishing new research, particularly if that research appears in a prestigious traditional journal.

Therefore we should ask ourselves:

Is peer review broken as a system? Yes, but many believe it is required to maintain a certain level of quality control in academia. At the very least, Pre-Publication Peer Review is a concept recognized by the scientific community as supporting rigorous communication. More coverage of the flaws within the Peer Review system is provided in this post by The Scientist.

Why do we review? A systematic survey on Peer Review by Sense About Science in 2009 represented the views of more than 4,000 researchers across all disciplines. It found that the majority of researchers agreed that reviewing meant playing their part within the academic community. Review requests were hard to decline given their sense of commitment to their peers, despite the fact that they didn’t believe they would gain personal recognition for their work. The second most common motivation was the enjoyment of helping others improve their articles. And finally, more than two thirds of the participating researchers liked seeing new work from their peers ahead of publication. I will keep this latter point in mind for discussion later.

What sort of reward would researchers like? Having understood the main reasons why researchers agree to review, the survey asked what would further incentivize them to undertake this task, possibly in a timelier manner! Interestingly, around half of the researchers said that they would be more likely to review for a journal if they received a payment (41%) or a complimentary subscription (51%, in the days before the spread of Open Access). Despite this result, only a vanishingly small minority of journals provides any kind of payment to its reviewers. This seems even more remarkable in light of the 35-40% profit margins that are commonplace in for-profit scholarly journal publishing.

Given that these publishers can afford to pay, why don’t they? One plausible answer is that they do not want to introduce new bias into the process. Another is simple economics: with roughly 1.5-2 million articles published every year in the STM disciplines, as reported by Bjork et al., an average rejection rate of 50% (a factor of 2 between published articles and the total number of manuscripts to be reviewed), and at least two reviewers per paper, it would cost publishers a tidy sum to pay each reviewer a reasonable amount of money for their considerable time.
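
To get a feel for the scale of that sum, here is a rough back-of-the-envelope calculation based on the figures above; the per-review fee is purely a hypothetical assumption for illustration, not a number from any of the surveys cited here.

```python
# Back-of-the-envelope estimate of what paying reviewers might cost publishers.
# Figures from the text: ~2 million STM articles per year (upper end of the
# 1.5-2 million range reported by Bjork et al.), a ~50% rejection rate (so roughly
# twice as many manuscripts are reviewed as are published), and at least two
# reviewers per manuscript. The fee per review is a hypothetical assumption.

articles_published_per_year = 2_000_000
rejection_factor = 2            # 50% rejection rate => ~2x manuscripts reviewed
reviewers_per_manuscript = 2
fee_per_review_usd = 100        # hypothetical figure, for illustration only

reviews_per_year = articles_published_per_year * rejection_factor * reviewers_per_manuscript
total_cost_usd = reviews_per_year * fee_per_review_usd

print(f"Reviews per year: {reviews_per_year:,}")               # 8,000,000
print(f"Annual cost at $100 per review: ${total_cost_usd:,}")  # $800,000,000
```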

Are there other ways to provide reviewers with credit? A still significant percentage of researchers said that acknowledgement in the journal, or formal accreditation such as CME/CPD points, could improve their motivation. However, only a minority would feel motivated by the idea that their report would be published alongside the paper.

Half of all scientists felt they would be rather discouraged if their identity were disclosed to all readers of the article. The other half did not feel discouraged and expected higher quality from a more open evaluation process. Similar findings were reported in a study by Van Rooyen et al., who found that 55% of responding scientists were in favor of reviewers being identified, while only 25% were against it. More importantly, the authors concluded that Open Peer Review leads to better quality reviews. One reason for this conclusion is quite obvious: if both the name and the comments are disclosed to the public, it seems only natural that a reviewer will spend at least as much, if not more, effort to make sure that the report is as good as a scientific paper. Another reason is that the reviewer is aware that a useful report can contribute to scientific discourse much more efficiently than a short statement with a few ticks in a standard reviewers’ form that only two people can access: the journal’s editor and, most likely, the author. Reviewers’ comments in an open report can in principle be read by all researchers in the field and may help them to improve their own work.

In another study, which analyzed the effects of blinding on the quality of peer review, McNutt et al. [2] reported that reviewers who chose to sign their reviews were more constructive in their comments. In principle, any new concept that motivates reviewers to submit a review, and that is not simply based on a cash incentive, will require disclosure of both the identity of the reviewer and the report.

Should we continue to review? Some 15,000 researchers have asked themselves this question and subsequently withdrew their reviewing services. One reason not to participate in the review process is to protest against monopoly power within the international publishing industry, which led to the Elsevier boycott. Coverage of this issue can be found in the New York Times and at the Cost of Knowledge.

Having asked a range of different questions above, I’d like to move on and describe the different types of Peer Review.

Disclosure of a reviewer’s identity to the public is called Open Peer Review. This simply means that either the names or the full reports for a paper are published together with the paper itself, after the peer review process has been completed. Open, non-mandatory peer review has been established, for example, by PLOS Medicine and PeerJ.

Let us now imagine a more open evaluation system for research which has been introduced as Post-Publication Peer Review (PPPR). I have previously discussed the ethics of this topic on my blog Science Publishing Laboratory. Like the current system of Pre-Publication evaluation, the new system relies on reviews and ratings. However, Post-Publication Peer Review differs in two crucial respects:

  1. Journal editors and reviewers do not decide whether or not a work will be published – as the articles are already published
  2. Reviews take the form of public communications to the community at large, not secret missives to editors and authors

Post-Publication Peer Review is used, for example, by F1000Research. In addition, Public Post-Publication Peer Review:

  1. Invites the scientific community to comment on, review and rate a paper. The journal editor does not select the reviewers; instead, reviewing becomes a voluntary activity for those who feel interested and qualified to do so.
  2. Has no limitation on the number of reviewers, unlike other Peer Review methodologies.
  3. Imposes no artificial time limit after which reviewing is “over”. Even after years have gone by, researchers can still evaluate the paper and write a review.

New publishing platforms which have adopted Public Post-Publication Peer Review have been recently established by ScienceOpen and The Winnower (on the same day!).

The advantages of the open evaluation of research are readily observable.  As Kriegeskorte, a member of the Editorial Board of ScienceOpen, summarized in his article entitled Open Evaluation: A Vision for Entirely Transparent Post-Publication Peer Review and Rating for Science:

Public Post-Publication Peer Review makes peer review more similar to getting up to comment on a talk presented at a conference. Because the reviews are transparent and do not decide the fate of a publication, they are less affected by politics. Because they are communications to the community, their power depends on how compelling their arguments are to the community. This is in contrast to secret peer review, where weak arguments can prevent publication because editors largely rely on reviewers’ judgments and the reviewers are not acting publicly before the eyes of the community. 4PR is a real discourse in science, and the general research community benefits from it.

What incentives does an individual reviewer have when submitting a comment or review in a new open evaluation system?

  • Reviewers could be credited for their work, for example for how frequently they participate and the extent of their commitment to playing their part in the academic community. As mentioned above, this is a key motivation for the vast majority of researchers when it comes to writing reviews
  • Authors and peers could comment on reviews to highlight those that have been more useful to them than others. This establishes a rating not only for the paper itself but also for the comments and reviews, which is a completely new concept in science. As a result, reviewers are credited simply by the fact that peers acknowledge their work through (positive) feedback
  • Reviewers who contribute more frequent and constructive reviews than their colleagues within a certain area of expertise could be highlighted by a ranking (a minimal scoring sketch follows this list). Such a ranking would be a direct measure of a researcher’s individual performance and far more useful for evaluating researchers than the Impact Factors of the journals in which they have published.
  • If a reviewer received direct feedback about the review, an open discussion could ensue, which may lead to a more concentrated level of discourse, as in a conversation at a conference or during a poster presentation.
  • And finally, if reviewers decide to write a review, it is because they are interested in the new research of one of their peers and are willing to read the paper. More importantly, they are free to decide when to submit the review. This straightforward situation is completely different from present Pre-Publication Peer Review, in which a researcher is asked by an editor to 1) read and 2) review a new submission they have never seen before.
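
To make the ranking idea above a bit more concrete, here is a minimal sketch of how reviews and peer feedback might be aggregated into a simple reviewer score. The weighting and the data are invented for illustration; this is not how any existing platform actually scores reviewers.

```python
# Minimal sketch: rank reviewers by how many reviews they have written and how
# much positive peer feedback those reviews received. Weights and data are
# hypothetical, for illustration only.
from collections import defaultdict

# (reviewer, positive feedback votes on that review) - invented example data
reviews = [
    ("reviewer_a", 5),
    ("reviewer_a", 2),
    ("reviewer_b", 8),
    ("reviewer_c", 0),
]

scores = defaultdict(float)
for reviewer, votes in reviews:
    # one point per review written, plus a bonus for positive peer feedback
    scores[reviewer] += 1.0 + 0.5 * votes

ranking = sorted(scores.items(), key=lambda item: item[1], reverse=True)
for rank, (reviewer, score) in enumerate(ranking, start=1):
    print(f"{rank}. {reviewer}: {score:.1f}")
```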

Despite reports such as The Peer Review Process by the JISC Scholarly Communication Group indicating that the Peer Review process would evolve, new concepts have so far been introduced only rarely. Open Access and a transparent, open peer-to-peer evaluation are the prerequisites for a new peer review concept that provides more benefits for reviewers than the present review system in scholarly publishing.

With growing awareness of the damage to the public perception of research caused by high-profile retractions such as the one reported in Nature, and a recently observed correlation between higher retraction rates and prestigious journals, it seems only logical that the momentum towards more transparency in research communication will grow.

Therefore we should support new ventures and publishing initiatives that have introduced principles of open evaluation and transparent reviewing. These new projects could help open our eyes to “an ideal world”, as Timothy Gowers, Royal Society mathematician and Fields Medalist, summarized in his terrific vision to revolutionize scholarly communication and publishing. It will be interesting to find out how this will also improve the motivation of reviewers to do their important work of maintaining quality in academic publishing.

A note about the author:

Dr. Alexander Grossmann is a physicist and Professor of Publishing Management at HTWK University of Applied Sciences, Leipzig (Germany). He worked for more than 12 years in publishing management at major international companies before co-founding ScienceOpen.

A note about future posts:

I plan to cover new services such as Academic Karma, Publons, Pub Peer and others in my next post; this one is already lengthy, and I want to give their work due consideration.

References:

  1. (ALPSP) Association of Learned and Professional Society Publishers (2002). Authors and Electronic Publishing: The ALPSP research study on authors’ and readers’ views on electronic research communication. ISBN 978-0-907341-23-9.
  2. (McNutt) McNutt RA, et al. (1990). The effects of blinding on the quality of peer review: a randomized trial. JAMA 263(10):1371-6.

 

In:  Peer Review  

Peer Review 2.0

There is no doubt that peer review is one of the most crucial features of scientific publishing. A scientific discovery that is written down and then hidden in a secret place is not part of science. Only if it is made public and judged by the scientific community does it enter the next stage and perhaps become part of the scientific record. Some of those discoveries will be immediately rejected and labeled as mere bullshit, others will be accepted as proper science, though simply ignored. Still others will be regarded as useful or even celebrated as scientific breakthroughs. It even happens from time to time that long-ignored discoveries experience a revival and suddenly become the focus of attention, years or decades after their initial publication.

We all know how peer review works. We have done it for centuries and it has become part of our scientific culture. We’ve learned from our supervisors how it’s done properly, and we’ll teach it to our own PhD students as soon as we are the supervisors. Interestingly, we rarely reflect on WHY we do things. So, what we need to ask ourselves is:

“Why did we make single-blind pre-publication peer review the gold standard?”

First of all, because it was the best way to do peer review – at least in times when manuscripts and referee reports were sent by mail, and articles were bundled into issues and distributed as printed journals to libraries worldwide. It simply didn’t make sense to review after the paper was already on the shelves, and it was completely reasonable to send manuscripts to peer review first and print only those articles that had passed this initial quality test. By the way, the second and even more important quality check is still done by the whole scientific community: reproducibility, or rather the lack of it, is a big issue in the empirical sciences, despite the peer review system we have in place.

The peer review process was managed by publishing houses. They knew the secret crafts called typesetting and printing and had taken the trouble to organize the global delivery of their product, the scientific journal. The money for all these services was paid by libraries in the form of journal subscription fees; publishing was hence (almost) free for authors. Space was precious and costly. In such a system it was even more important to pre-select the articles that were “worth publishing”, with the beneficial side effect that it boosted the most precious selling point, the Journal Impact Factor. So only the “best” and “most interesting” papers were selected, “not so important” sections like Materials and Methods and References were shortened, and “published” became synonymous with “peer reviewed”. For a deeper analysis of the major disadvantages of the IF, see Alexander’s discussion “Journal Impact Factors – Time to say goodbye?” on this blog. Another less beneficial side effect of the label “published”: we all tend to perceive such papers as something carved in stone.

In the early 1990s, a revolution named the World Wide Web started to become reality at CERN. It had the potential to change the world forever, and it truly fulfilled this promise. I think it is legitimate to say that human history can be divided into the time before and after the internet. Today, information can be stored and distributed in digital form: fast, easy, cheap, and with almost no limitations. This led to a paradigm shift in scientific publishing – or, as Clay Shirky puts it:

“[…] the word “publishing” means a cadre of professionals who are taking on the incredible difficulty and complexity and expense of making something public. That’s not a job anymore. That’s a button. There’s a button that says “publish,” and when you press it, it’s done.”

Nevertheless, we still do peer review as we did a hundred years ago. Why not use the advantages of the internet when judging scientific literature? Why do we still let a handful of editors preselect papers, with the Journal Impact Factor in mind when deciding? Why not make work public first and let the scientific community judge it afterwards? Why not assess the “impact” of each article separately, instead of referring to the prestige of the journal and using the average number of citations of all its articles as a measure? Why not make the identity and comments of each reviewer public? Why not let readers benefit from the reviewers’ thorough analyses and decide for themselves which information they regard as useful?

In the end the reader will have to judge the paper anyway. I think it would be best if the reader had as much information available as possible. Not as a must, but as an option. If you are content with the information “has undergone peer review”, fine. I personally would like to know: How many reviewers? Who are they? Which were the positive and which were the negative points? By no means does this information supersede my own judgment. It simply helps me to assess the quality of the review process, points me at relevant details and enables me to preselect papers by my own criteria. Nothing argues against a pool of papers of heterogeneous quality, as long as I’m able to select in a suitable way:

“Please show me only those articles with at least two positive reviews and an average rating of 80%”
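
As a toy illustration of how such a reader-defined filter could work, here is a small sketch; the data structure and thresholds are invented and do not reflect any real platform’s API.

```python
# Toy filter: keep only articles with at least two positive reviews and an
# average rating of at least 80%. The article records are invented examples.

articles = [
    {"title": "Paper A", "ratings": [90, 85, 70]},   # review ratings in percent
    {"title": "Paper B", "ratings": [60]},
    {"title": "Paper C", "ratings": [95, 88]},
]

def passes_filter(article, min_positive=2, min_average=80, positive_threshold=80):
    ratings = article["ratings"]
    if not ratings:
        return False
    positive_reviews = sum(1 for r in ratings if r >= positive_threshold)
    average_rating = sum(ratings) / len(ratings)
    return positive_reviews >= min_positive and average_rating >= min_average

for article in articles:
    if passes_filter(article):
        print(article["title"])   # prints "Paper A" and "Paper C"
```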

And even better, reviews can now be attributed to a person. That means you can start building a reputation as a good reviewer, in addition to being a good researcher. Furthermore, I personally would think twice before signing a review and would make sure that I had done a proper job. This does NOT mean that anonymous reviews are of lower quality. Far from it! Hundreds of thousands of conscientious reviewers are working behind closed doors to keep the system running! I simply think it’s time to reward them for this important and honorable duty.

No system is perfect and each has advantages and disadvantages. The system of Public Post-Publication Peer Review we offer on our platform ScienceOpen is a step in the right direction, at least in my eyes. I cordially invite everyone to help us improve it further and shape it into a system that benefits everyone.
