We talk a lot about peer review in the scholarly communications world. Many of us – and our organizations – are working to improve both the process and the experience for researchers, which has led to a significant increase in the range of options available, especially – but not exclusively – for reviewing journal articles. From double-blind to completely open review, pre- and/or post-publication, and even transferable peer review, not to mention the work being done on peer review recognition and validation by organizations like Publons and PRE, there’s a plethora of new approaches and services to choose from.
But what do researchers make of all this? What are their experiences of peer review? How and why do they themselves review, and what do they get from reviews of their own work? For this reflection, we asked researchers around the world to tell us about their views of peer review.
By and large, their feedback was very positive, with good experiences outweighing bad and universal agreement that peer review is, as Elizabeth Briody of Cultural Keys, USA, says: “a critically important process for evaluating the merit, content, relevance, and usefulness of scholarly publications” – or as Hugh Jarvis, Cybrarian, University at Buffalo, USA, describes it: “Peer review is the glue of academic publishing.” Saurabh Sinha, Executive Dean, Faculty of Engineering & the Built Environment, University of Johannesburg, South Africa agrees that: “it positions our work with respect to the body of already published knowledge. The approach also helps to ensure, as far as possible, the correctness of the work, elimination of potential blind spots, and validity of assumptions for a practical world.”
Pretty much everyone noted the importance of peer review – both as reviewer and author – to them personally as well as professionally. For example, Professor Yongcheng Hu, a medical researcher in China, commented: “Peer review is an essential arbiter of scientific quality, no doubt, it has a great impact on scientific communication and is of great value in determining academic papers’ suitability for publication, while for me, via personal experience, it is also a process of exploration and sublimation.” Erik Ingelson, Professor of Molecular Epidemiology at Uppsala University in Sweden, currently Visiting Professor at Stanford University, USA, adds: “Mostly, my experiences of being a reviewer have been positive; I get to think critically about study design and methods and learn new things on the way. Similarly, most of the time the review process is positive also as the author, since you get valuable input and the paper that comes out is often better than the original submission.” Anna Cupani, a Belgian researcher, agrees: “Having someone reading and commenting on your research is beneficial for several reasons: it validates your work, it confirms what you are doing is meaningful not only for you but for a wider scientific audience and it helps you focus and improve your research. You never grasp the meaning of something as deeply as when you have to explain it to someone else!” And Lee Pooi See, Associate Chair (Research), School of Materials Science and Engineering, Nanyang Technological University, Singapore, adds: “My personal experience of being reviewed has been interesting, especially in receiving scientific viewpoints from different reviewers on emerging topics. Peer review also steers us to identify those unaddressed aspects of the related research topics.”
Several people also commented that there are upsides and downsides to peer review. Janine Milbradt, who is currently working on her PhD at the Institute for Human Genetics, University of Cologne, Germany, says: “You never know what is going to happen! All you can be sure about is that you will have to put another 3-6 months of work into your paper. Having a paper reviewed is a nerve-stretching process, filled with hopes and dreams about the reviewers actually liking your research. On a more serious note, the review process is a very important tool to find incomprehensible or knowledge lacking parts of your research to improve your paper.” Professor Wong Limsoon, KITHCT Professor of Computer Science, National University of Singapore comments: “I appreciate very much constructive reviews that gave me really useful suggestions on my work. I am sometimes annoyed by uninformed comments, but fortunately these are few.”
So what improvements to peer review would our group of researchers like to see? To quote Professor Sinha again: “Scholarly peer-review has…the opportunity to improve beyond the past, where today, coupled with data, crowd-sourced reviews/discussion, newer open-access technologies could play a dynamic role of developing credibility of research-work and at the same time increasing competition!” Hugh Jarvis likewise has “great hopes that peer review will develop a much more expanded role in the future, and provide input before and after publication, similar to the role the comments serve in Current Anthropology and the product ratings in sites like Amazon.com.” And Joao Bosco Pesquero, Professor, Federal University of Sao Paulo, Brazil would also like to see a more open approach: “The more openly we produce science and expose our work to criticism, the more it helps to improve what we do.”
Perhaps the best summary of why researchers continue to value peer review – both as authors and as reviewers – comes from PhD student Grace Pold of UMass – Amherst, USA, who told us: “Although I have had the opportunity to formally review only four or five papers, reviewing papers is one of my favorite things to do. First off, it is a good reminder that not all papers are born perfect, and when I am struggling to try and finish my own work and the prospect of a well-polished manuscript seems too far in the distance, it gives me hope. Second, is there a better opportunity to see what your colleagues are working on and thinking about than by reviewing their work? Third, the idea of being able to help shape the information released into the public sphere is very enticing. Fourth, it is a great excuse to really think about the assumptions you and others make in your research…when you review, it is your responsibility to stop and think about why this is the way things are done. Fifth, thinking up alternative interpretations and then filtering through the data presented in the paper to determine the robustness of the conclusions is a rewarding challenge. Finally, reviewing papers provides an opportunity to slow down and formulate a full, well-rounded opinion on something, something which unfortunately happens rarely in the life of the frantic modern scientist stuck in the nitty-gritty details of doing experiments. And I think that from a personal perspective, that final point of generating a sense of accomplishment in doing a good job of thinking things through to the end is probably the greatest motivation for me to review papers.”
Imagine if you will a perfect world where all knowledge is openly available to use and share without restriction. This might seem like a bit of a stretch most days but bear with me here!
Imagine that the content narrative continues to move beyond the confines of today’s mainly static article; that an ongoing stream of results, data, figures and ideas flows for transparent review and discussion. In short, that a reductionist approach to scientific communication prevails, rendering journals, with their slow publication cycles and impact factors, obsolete.
It’s not that hard to see the evidence of these trends already. Think about the rise of blogs and social media as suitable places for scientific discussion, the growing importance of continuous publication, data sharing and interactive figures. All this in the pursuit of making research and researchers themselves more visible, as they deserve to be.
This Peer Review Week, ScienceOpen wants to pose a simple question. As the number of research outputs grows and diversifies (data sets, negative results, case reports, preprints, posters…), is the research community going to be able to peer-review all these objects prior to publication?
We think not. There isn’t enough time in the day, money to pay for it, or even appetite for doing this now. Will these outputs be useful nonetheless? Absolutely – if we have a powerful way to find and filter them based on parameters readers find helpful and authors find rewarding. For example:
What do my peers think of this information?
Are there any updates to it?
What impact did it make in the world and who noticed?
Which work is worth highlighting in a specific field?
How many times was it cited and where?
If I took the time to review it, can my contribution be found and cited?
Will these efforts enhance my career prospects?
None of these valid questions are impacted by an evolution away from blind or double-blind anonymous peer review, apart from the speed with which we can answer them. Transparent processes and simple web tools can filter faster, better and cheaper than journals and pre-publication peer review ever could.
This is why at ScienceOpen we’ve developed systems for Post-Publication Peer Review; Versioning; DOI allocation; Article Metrics; Collections; Open Citation Information and more – to demonstrate a different (and we would argue better) way forwards.
This inaugural Peer Review Week, we invite you to consider this argument and disagree with us by all means. We look forward to a lively and spirited debate!
Life in California is good. Truthfully, that’s an understatement. As an ex-pat Brit, it’s great. Public holidays are rarely marred by rain; tomatoes grow outdoors (as do oranges and avocados); every work day is “casual Friday”.
There’s really only one downside, and that’s our time zone which means that in terms of the global conversation, we are constantly last to the party!
And so it goes with the first ever Peer Review Week. As the “lady at the helm” for social media, it’s lunchtime here in San Francisco and I am frantically trying to catch up with all the stories that everyone else has already posted.
Rather than give you an exhaustive list of the conversations and coverage, which you can see for yourself at #peerrevwk15, I am going to highlight a few that particularly stood out for me.
Listen to this podcast by Chris O’Neil from Bioscientifica, which begins with a truism: “none of us like the peer review process”! He goes on to explain that despite this visceral reaction, most researchers accept that their article is improved by it.
It seems that nearly every day there’s a new online conversation about Post-Publication Peer Review (PPPR). We participate in nearly every single one because PPPR is a hallmark of what we do at ScienceOpen.
It’s probably true to say that over the course of my career (PLOS & Nature), I’ve experienced many permutations of research communication but the one that I like the best is the one we offer at ScienceOpen. Before I joined in May, the team here had quietly developed a unique and smart way to make PPPR work.
It seems that now is the right time to clearly explain ScienceOpen’s unique approach to “publish then filter”. As the debate about PPPR intensifies it’s important to understand that “not all PPPR is equal”. Here’s our recipe:
We publish articles with DOI within about a week after an initial editorial check – we don’t publish everything that we receive.
We provide proofs, basic copy-editing and language help during the production cycle – if you think all OA publishers offer this, you’d be wrong!
We facilitate Non-Anonymous Post-Publication Peer Review (PPPR) from experts with at least five peer-reviewed publications per their ORCID to maintain the level of scientific discourse on the platform. We believe that those who have experienced Peer Review themselves are more likely to understand the pitfalls of the process and offer constructive feedback to others.
We have versioning for authors who wish to respond to review feedback and revise their article.
You might think that running a publishing platform built around PPPR would keep us awake at night worrying about fake reviews and identities, but oddly it doesn’t. We agree with Anurag Acharya, the co-founder of Google Scholar, who said in a recent Nature interview that when everything is visible under your name, you can be called on it at any time – so why risk ruining your reputation? Additionally, ORCID has protections built into its system.
We think the ScienceOpen approach might just be the next wave of OA and there are some who agree with us. These include the experts on our Editorial and Advisory Boards such as Peter Suber, Stephen Curry, Anthony Atala, Bjorn Brembs, Raphael Levy, Philip Stark, Nick Jewell and many others.
But, what really convinces me that PPPR is the way forwards and that our method is going to give the community enough confidence to make the switch are the experiences of our authors. It’s easy to ignore yet another innovative organization telling you why their approach is the best but the voices of the community are always the most compelling.
PS If you have an article that you’d like to publish before the end of the year, there’s still time to do so with us.
One of the trickiest parts about launching anything new – true for PLOS ONE too back in the day (hard to believe now!) – is that the best way to explain what you do is to show it in action. Since we only officially launched in May, we’ve been watching some interesting use-cases develop, by which we mean ScienceOpen articles with Non-Anonymous Post-Publication Peer Review (PPPR). Even though we publish with a DOI in about a week, it’s taken a little longer for the reviewers to have their say (reviews also receive a DOI), but we’re finding that what they say is well worth reading.
These articles and their associated reviews reassure us that PPPR, which some feel is still pretty radical, is a nascent but potentially healthy way to improve how we review research. They also start to show that PPPR can benefit all sorts of research. If it can work for less spectacular, negative or contradictory research, then perhaps it will shine for once-in-a-lifetime findings (which are of course far rarer).
What do these use-cases tell us? Mostly that it’s early days, so meaningful observations are perhaps premature! However, here are some thoughts:
The reviewers that are being invited to the scientific conversation are participating and broadening the debate
The reviews are respectfully delivered with a straightforward tone, even when critical (probably because they are Non-Anon)
It’s good to see papers from the medical community, arguably the quintessential OA use-case for researchers, patients, their families and friends
The reviewers are appropriately matched to the content; authors can suggest up to 10, and anyone with 5 or more publications on their ORCID iD can review any content on the platform
The authors are largely, but not exclusively, from our Editorial Boards (no surprises here since they are usually the first to support a new publishing venture and are more senior so are freer to experiment)
Reading Non-Anon PPPR is a new skill, requiring you to balance a scholar’s background with their reviews and to compare and contrast them with those of others
None of these authors have yet used Versioning to revise their original articles in the light of reviewer feedback received (although this article is now on version 2)
Anyway, we hope you enjoy watching how PPPR at ScienceOpen evolves as much as we do! Feel free to leave a comment on this post to continue the conversation.
In a 2002 survey performed by the Association of Learned and Professional Society Publishers (ALPSP), 45% of the respondents expected to see some Peer Review change in the next five years – for example, journals moving to Open Post-Publication Peer Review. Although the timing of their prediction was off, it is true that there is now growing interest in this field and a few practitioners.
Driven by the fact that more and more scholarly publications are launched every year, the concept of Peer Review has been criticized for consuming too much valuable time. Moreover, Pre-Publication Peer Review and selection does not protect against fraud or misconduct. Other questions that have been raised about Peer Review include:
What does it do for Science?
What does the scientific community want it to do?
Does it illuminate good ideas or shut them down?
Should reviewers remain anonymous?
In this post, I want to explore in more detail what motivates researchers to evaluate the previously submitted work of their peers. If we can better understand the reasons why researchers review, we can also discuss scenarios which may improve both the transparency and quality of that process.
Let’s first consider what could boost the motivation of a researcher to review an article. At present there is a myriad of excuses that most of us use to put off this extra work, which usually claims several hours of an already tight time budget. The scientific community does not know or record how many hours scientists spend on Peer Review, and institutions do not acknowledge this huge time commitment when assigning new funding. In a closed Peer Review system the effort goes unrewarded: scientists receive no credit for their work, as they would for a citable publication, and it carries no weight when applying for a new position. This is completely different from the rewards which flow from publishing new research, in particular if that research has been published in a prestigious traditional journal.
Therefore we should ask ourselves:
Is peer review broken as a system? Yes, but many believe it is required to maintain a certain level of quality control in academia. At the very least, Pre-Publication Peer Review is a concept recognized by the scientific community as supporting rigorous communication. More coverage of the flaws within the Peer Review system is provided in this post by The Scientist.
Why do we review? A systematic survey by Sense About Science on Peer Review in 2009 represented the views of more than 4,000 researchers across all disciplines. It found that the majority of researchers agreed that reviewing meant playing their part within the academic community. Review requests were hard to decline given their sense of commitment to their peers, despite the fact that they didn’t believe they would gain personal recognition for their work. The second most common motivation was enjoying helping others to improve their articles. And finally, more than two thirds of all the participating researchers liked seeing new work from their peers ahead of publication. I will keep this latter point in mind for discussion later.
What sort of reward would researchers like? Having understood the main reasons why researchers agree to review, the survey asked what would further incentivize them to undertake this task, possibly in a timelier manner! Interestingly, many researchers said that they would be more likely to review for a journal if they received a payment (41%) or a complimentary subscription (51%, in the days before the spread of Open Access). Despite this result, only a vanishingly small minority of journals provides any kind of payment to its reviewers. This seems even more amazing in light of the 35-40% profit margins which are commonplace in for-profit scholarly journal publishing.
Given that these publishers can afford to pay, why don’t they? One acceptable answer could be that they do not want to introduce new bias into the process. Another is a matter of arithmetic: with roughly 1.5-2 million articles published every year in STM disciplines, as reported by Bjork et al., an average rejection rate of 50% (i.e. roughly twice that number of submitted manuscripts to review), and at least two reviewers involved per paper, paying each reviewer a reasonable amount of money to compensate them for their considerable time would cost publishers a tidy sum.
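To make that arithmetic concrete, here is a minimal back-of-envelope sketch in Python. The article counts, rejection rate and reviewer numbers come from the text above; the per-review fee is a purely hypothetical figure added for illustration.

```python
# Back-of-envelope estimate of what paying reviewers would cost.
# Inputs from the text: ~1.5-2M STM articles/year (Bjork et al.),
# ~50% rejection rate (so ~2x that many manuscripts are reviewed),
# and at least two reviewers per paper.
# The $100 fee is a hypothetical assumption, not a figure from the article.

articles_per_year = 1.5e6     # lower bound of published STM articles/year
rejection_factor = 2          # 50% rejection => ~2x manuscripts reviewed
reviewers_per_paper = 2       # at least two reviewers per submission
fee_per_review = 100          # hypothetical fee in USD

reviews_per_year = articles_per_year * rejection_factor * reviewers_per_paper
annual_cost = reviews_per_year * fee_per_review

print(f"~{reviews_per_year:,.0f} reviews/year, ~${annual_cost:,.0f}/year")
```

Even at the lower bound this is six million reviews a year, so even a modest fee quickly runs into hundreds of millions of dollars annually.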
Are there other ways to provide reviewers with credit? A still significant percentage of researchers said that acknowledgement in the journal, or formal accreditation such as CME/CPD points, could improve their motivation. However, only a minority would feel motivated by the idea of their report being published with the paper.
Half of all scientists felt that they would be rather discouraged if their identity was disclosed to all readers of the article. The other half did not feel discouraged and expected higher quality from a more open evaluation process. These findings have been echoed in a study by Van Rooyen et al., who found that 55% of the responding scientists were in favor of reviewers being identified, while only 25% were against it. More importantly, the authors concluded that Open Peer Review leads to better quality reviews. One reason for this conclusion is quite obvious: if both the name and the comments are disclosed to the public, it seems only natural that a reviewer will spend at least as much, if not more, effort to make sure that the report is as good as a scientific paper. Another reason is that the reviewer is aware that a useful report could contribute to scientific discourse much more efficiently than a short statement with a few ticks in a standard reviewers’ form which only two people can access: the journal’s editor and most likely the author. Reviewers’ comments in an open report can in principle be read by all researchers in that field and may help them to improve their own work.
In another study, which analyzed the effects of blinding on the quality of peer review, McNutt et al. reported that reviewers who chose to sign their reviews were more constructive in their comments. In principle, all new concepts which motivate reviewers to submit a review on a paper, and which are not simply based on a cash-value incentive, will require disclosure of both the identity of the reviewer and the report.
Should we continue to review? Some 15,000 researchers have asked this question and subsequently withdrawn their services in this regard. One reason not to participate in the review process is to protest the monopoly power within the international publishing industry, which led to the Elsevier boycott. Coverage of this issue can be found in the New York Times and at the Cost of Knowledge.
Having asked a range of different questions above, I’d like to move on and describe the different types of Peer Review.
Disclosure of a reviewer’s identity to the public is called Open Peer Review. This simply means that either the names or the full report for a paper will be published with the paper itself, after the peer review process has been completed. Open, non-mandatory peer review has been established, for example, by PLOS Medicine and PeerJ.
Let us now imagine a more open evaluation system for research which has been introduced as Post-Publication Peer Review (PPPR). I have previously discussed the ethics of this topic on my blog Science Publishing Laboratory. Like the current system of Pre-Publication evaluation, the new system relies on reviews and ratings. However, Post-Publication Peer Review differs in two crucial respects:
Journal editors and reviewers do not decide whether or not a work will be published – as the articles are already published
Reviews take the form of public communications to the community at large, not secret missives to editors and authors. Post-Publication Peer Review is, for example, used by F1000Research. In addition, Public Post-Publication Peer Review:
Invites the scientific community to comment, review and rate a paper. The journal editor does not select the reviewers; instead, reviewing becomes a voluntary activity for those who feel interested and qualified to do so.
Has no limitation as to the number of reviewers, unlike other Peer Review methodologies.
Imposes no artificial time limit after which reviewing is “over”. Even after years have gone by, researchers can evaluate the paper and write a review
New publishing platforms which have adopted Public Post-Publication Peer Review have been recently established by ScienceOpen and The Winnower (on the same day!).
Public Post-Publication Peer Review makes peer review more similar to standing up to comment on a talk presented at a conference. Because the reviews are transparent and do not decide the fate of a publication, they are less affected by politics. Because they are communications to the community, their power depends on how compelling their arguments are to the community. This is in contrast to secret peer review, where weak arguments can prevent publication because editors largely rely on reviewers’ judgments and the reviewers are not acting publicly before the eyes of the community. Public Post-Publication Peer Review is a real discourse in science, and the general research community benefits from it.
What incentives does an individual reviewer have when submitting a comment or review in a new open evaluation system?
Reviewers could be credited for their work, for example, how frequently they participated and to what extent they felt committed to playing their part as a member of the academic community. As we mentioned above, this has been a key motivation for the vast majority of researchers in terms of writing a review
Authors and Peers could comment on reviews to emphasize those which have been more useful for them than others. This establishes a rating not only for the paper itself but also for the comments and reviews which is a completely new concept in science. As a result reviewers get credited simply by the fact that peers acknowledge their work through (positive) feedback
Reviewers who contribute more frequent and constructive reviews than their colleagues within a certain area of expertise could be highlighted by a ranking. This ranking is a direct measure of the individual performance of a researcher which would be much more useful for evaluations of researchers compared to the Impact Factors of the journals in which they have published.
If a reviewer received direct feedback about the review, an open discussion could ensue which may lead to a more concentrated level of discourse, as, for example, during a conversation during a conference or a poster presentation.
And finally, if a reviewer decides to write a review, they are willing to read the paper because they are interested in the new research of one of their peers. More importantly, they are free to decide when to submit the review. This straightforward situation is completely different from the present Pre-Publication Peer Review, in which a researcher is asked by an editor to 1) read and 2) review a new submission which they have never seen before.
Despite reports such as The Peer Review Process by the JISC Scholarly Communication Group, which indicated that the Peer Review process would evolve, new concepts have so far been introduced only rarely. Open Access and transparent Open Peer-to-Peer evaluation are the prerequisites for a new peer review concept which provides more benefits for reviewers than the present review system in scholarly publishing.
With growing awareness of the damage to the public perception of research caused by high-profile retractions such as the one reported in Nature, and an interesting recently observed correlation between higher retraction rates and more prestigious journals, it seems only logical that the momentum towards more transparency in research communication will grow.
Therefore we should support new ventures and publishing initiatives which have introduced principles of open evaluation and transparent reviewing. These new projects could help open our eyes to “an ideal world” as Timothy Gowers, Royal Society Mathematician and Fields Medalist summarized in his terrific vision to revolutionize scholarly communication and publishing. It will be interesting to find out how this will also improve the motivation of reviewers to do their important work of quality maintenance in academic publishing.
A note about the author:
Dr. Alexander Grossmann is a physicist and Professor of Publishing Management at HTWK University of Applied Sciences, Leipzig (Germany). He worked for more than 12 years in publishing management at major international companies before co-founding ScienceOpen.
A note about future posts:
I wish to cover new services such as Academic Karma, Publons, PubPeer and others in my next post; this one is already lengthy, and I want to give their work due consideration.
(ALPSP) The Association of Learned and Professional Society Publishers (2002): Authors and Electronic Publishing. The ALPSP research study on authors’ and readers’ views on electronic research communication (ISBN 978-0-907341-23-9)
(McNutt) McNutt RA et al: The effects of blinding on the quality of peer review. A randomized trial. JAMA. 1990 Mar 9;263(10):1371-6
There is no doubt that peer review is one of the most crucial features of scientific publishing. A scientific discovery that is written down and then hidden in a secret place is not part of science. Only if it’s made public and judged by the scientific community does it enter the next stage and perhaps become part of the scientific record. Some of those discoveries will be immediately rejected and labeled as mere bullshit; others will be accepted as proper science, though simply ignored. Still others will be regarded as useful or even celebrated as scientific breakthroughs. It even happens from time to time that long-ignored discoveries experience a revival and suddenly become the focus of attention – years and decades after their initial publication.
We all know how peer review works. We have done it for centuries and it has become part of our scientific culture. We’ve learned from our supervisors how it’s done properly, and we’ll teach it to our own PhD students as soon as we are the supervisors. Interestingly, we rarely reflect on WHY we do things. So, what we need to ask ourselves is:
“Why did we make single-blind pre-publication peer review the gold standard?”
First of all, because it was the best way to do peer review – at least in times when manuscripts and referee reports were sent by mail, and articles were bundled into issues and distributed as printed journals to libraries worldwide. It simply didn’t make sense to review after the paper was already on the shelves; and it was completely reasonable to send manuscripts to peer review first and print only those articles that had passed this initial quality test. By the way, the second and even more important quality check is still done by the whole scientific community. Reproducibility, or rather the lack of it, is a big issue in the empirical sciences – even with a peer review system in place.
The peer review process was managed by publishing houses. They knew the secret crafts called typesetting and printing and had taken the trouble to organize the global delivery of their product called the scientific journal. The money for all these services was paid by the libraries in the form of journal subscription fees. Publishing was hence (almost) free for authors. Space was precious and costly. In such a system it was even more important to pre-select those articles that were “worth publishing” – with the beneficial side effect that this positively affected the most precious selling point, the Journal Impact Factor. So, only the “best” and “most interesting” papers were selected, “not so important” sections like Materials and Methods and References were shortened, and “published” became synonymous with “peer reviewed”. For a deeper analysis of the major disadvantages of the IF, see Alexander’s discussion “Journal Impact Factors – Time to say goodbye?” in this blog. Another less beneficial side effect of the label “published”: we all tend to perceive papers as something that is carved in stone.
In the early 1990s, a revolution named the World Wide Web started to become reality at CERN. It had the potential to change the world forever – and it truly fulfilled this promise. I think it is legitimate to say that human history can be divided into the time before and after the internet. Today, information can be stored and distributed in digital form: fast, easy, cheap and with almost no limitation. This led to a paradigm shift in scientific publishing – or as Clay Shirky puts it:
“[…] the word “publishing” means a cadre of professionals who are taking on the incredible difficulty and complexity and expense of making something public. That’s not a job anymore. That’s a button. There’s a button that says “publish,” and when you press it, it’s done.”
Nevertheless, we still do peer review as we did a hundred years ago. Why not use the advantages of the internet when judging scientific literature? Why do we still let a handful of editors preselect papers, with the journal impact factor in mind when deciding? Why not make work public first and let the scientific community judge afterwards? Why not assess the “impact” of each article separately, instead of referring to the prestige of the journal and using the average number of citations of all its articles as the measure? Why not make the identity and comments of each reviewer public? Why not let readers benefit from the reviewers’ thorough analyses and decide for themselves which information they regard as useful?
In the end the reader will have to judge the paper anyway. I think it would be best if the reader had as much information available as possible. Not as a must, but as an option. If you are content with the information “has undergone peer review”, fine. I personally would like to know: How many reviewers? Who are they? Which were the positive and which were the negative points? By no means does this information supersede my own judgment. It simply helps me to assess the quality of the review process, points me at relevant details and enables me to preselect papers by my own criteria. Nothing argues against a pool of papers of heterogeneous quality, as long as I’m able to select in a suitable way:
“Please show me only those articles with at least two positive reviews and an average rating of 80%”
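A reader-side filter like that is trivial to express in code. Here is a minimal sketch in Python – the paper records and the threshold values are invented for illustration, not taken from any real platform:

```python
# Hypothetical paper records carrying open review metadata.
papers = [
    {"title": "Paper A", "positive_reviews": 3, "avg_rating": 0.85},
    {"title": "Paper B", "positive_reviews": 1, "avg_rating": 0.90},
    {"title": "Paper C", "positive_reviews": 2, "avg_rating": 0.70},
]

def matches(paper, min_positive=2, min_rating=0.80):
    """Apply the reader's own criteria: enough positive reviews
    AND a high enough average rating."""
    return (paper["positive_reviews"] >= min_positive
            and paper["avg_rating"] >= min_rating)

selected = [p["title"] for p in papers if matches(p)]
print(selected)
```

Only one of the three hypothetical papers clears both thresholds – and each reader could of course set thresholds of their own.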
And even better, reviews can now be attributed to a person. It means that you can start building up a reputation as a good reviewer – in addition to being a good researcher. Furthermore, I personally would think twice before signing a review and would make sure that I had done a proper job. This does NOT mean that anonymous reviews are of lower quality. Far from it! Hundreds of thousands of conscientious reviewers are working behind closed doors to keep the system running. I simply think it’s time to reward them for this important and honorable duty.
No system is perfect and each has advantages and disadvantages. The system of Public Post-Publication Peer Review we offer on our platform ScienceOpen is a step in the right direction – at least in my eyes. I cordially invite everyone to help us further improving it and try to shape it into a system that benefits everyone.