The arXiv is a server that hosts ‘eprints’ or ‘preprints’ of research papers, and is a key publishing platform for many fields, particularly physics and mathematics. Founded back in 1991 by Paul Ginsparg, it currently hosts over 1 million research articles, with more than 8000 submissions per month!
Despite now having been running for 25 years, the arXiv still represents one of the greatest technological innovations in using the Web for scholarly communication.
While the majority of the content submitted to the arXiv is subsequently also submitted to traditional journals for publication, there is still content which never goes beyond its confines. Irrespective of this, communities engaged with the arXiv still cite articles published there, whether or not they have been formally published in a journal elsewhere.
This is the whole purpose of the arXiv: to facilitate rapid peer-to-peer communication so that science can progress faster. The fact that all articles are publicly available is incidental, and just happens to be a topic of major interest given the growing open access movement.
However, the arXiv is not peer reviewed in the formal sense. It is moderated, so that junk submissions can be removed, or manuscripts recategorised, but it lacks the additional layer of quality control of traditional peer review.
So while some might think this poses a risk, ask yourself this question: do you re-use articles critical to your research without making sure that you have checked and understood the research to a sufficient degree that you can appropriately cite it? Because that’s peer review, and it applies irrespective of whether an article has already been peer reviewed or not.
The Zika virus is an international public health emergency, as declared by the World Health Organisation in early February. As such, it is critical that the global research community help combat this threat as rapidly and efficiently as possible. This is a case when science can quite literally save lives.
Recently, an article on the host–vector ratio of the Zika virus was posted on the arXiv, a platform for articles often called ‘preprints’. This means that the work has not yet been peer reviewed, and it cannot be commented on within the arXiv itself, which offers no commenting functionality. The paper is stuck in the hidden, timeless limbo of peer review until it eventually emerges in a journal or is ultimately rejected.
Traditional models of peer review occur pre-publication by selected referees and are mediated by an Editor or Editorial Board. This model has been adopted by the vast majority of journals, and acts as the filter system to decide what is considered to be worthy of publication. In this traditional pre-publication model, the majority of reviews are discarded as soon as research articles are published, and all of the insight, context, and evaluation they contain is lost from the scientific record.
Several publishers and journals are now exploring more adventurous models of peer review that occur after publication. The principle here is that all research deserves the opportunity to be published, and that filtering through peer review occurs after the actual communication of research articles. Numerous venues now provide inbuilt systems for post-publication peer review, including ScienceOpen, RIO, The Winnower, and F1000 Research. In addition to those adopted by journals, there are other post-publication annotation and commenting services, such as hypothes.is and PubPeer, that are independent of any specific journal or publisher and operate across platforms.
One main aspect of open peer review is that referee reports are made publicly available after the peer review process. The theory underlying this is that peer review becomes a supportive and collaborative process, viewed more as an ongoing dialogue between groups of scientists to progressively assess the quality of research. Furthermore, it opens up the reviews themselves to analysis and inspection, which adds an additional layer of quality control to the review process.
This co-operative and interactive mode of peer review, whereby it is treated as a conversation rather than a selection system, has been shown to be highly beneficial to researchers and authors. A study in 2011 found that when an open review system was implemented, it led to increased co-operation between referees and authors, as well as an increase in the accuracy of reviews and an overall decrease in errors throughout the review process. Ultimately, it is this process which decides whether research is suitable or ready for publication. A recent study has even shown that the transparency of the peer review process can be used to predict the quality of published research. As far as we are aware, there are almost no drawbacks, documented or otherwise, to making referee reports openly available. What we gain by publishing reviews is the time, effort, knowledge exchange, and context of an enormous amount of currently secretive and largely wasted dialogue – dialogue that could also save around 15 million hours per year of otherwise lost work by researchers.
Open peer review has many different aspects, and is not simply about removing anonymity from the process. Open peer review forms part of the ongoing evolution of an open research system, and the transformation of peer review into a more constructive and collaborative process. The ultimate goal of traditional peer review remains the same – to make sure that the work of authors gets published to an acceptable standard of scientific rigour.
There are different levels of bi-directional anonymity in the peer review process: the referees may know who the authors are but not vice versa (single-blind review), or both parties may remain anonymous to each other (double-blind review). Open peer review is a relatively new phenomenon (initiated in 1999 by the BMJ), one aspect of which is that the authors’ and referees’ names are disclosed to each other. The foundation of open peer review is transparency, to avoid competition or conflicts arising from the fact that those who perform peer review will often be the authors’ closest competitors, as they will tend to be the most competent to assess the research.
For the majority of scientists, peer review is seen as integral to, and a fundamental part of, their job as a researcher. To be invited to review a research article is perceived as a great honour due to its recognition of expertise, and forms part of the duty of a scientist to help progress research. However, the system is in a bit of a fix. With more and more research being published every year and ever-increasing demands on the time and funds of researchers, the ability to competently perform peer review is dwindling, simply due to competition with other duties. Why, many researchers might ask, should they spend their valuable time reviewing others’ work for little to no recognition or reward, as is the case with the traditional model? Indeed, many publishers opine that the greatest value they add is through managing the peer review process, which in many cases is performed on a volunteer basis by academic Editors and referees, and is estimated to cost around $1.9 billion in management per year. But who actually gets the recognition and credit for all of this work?
It’s not too hard to see that the practices of and attitudes towards ‘open science’ are evolving amidst an ongoing examination about what the modern scholarly system should look like. While we might be more familiar with the ongoing debate about how to best implement open access to research articles and to the data behind publications, discussions regarding the structure, management, and process of peer review are perhaps more nuanced, but arguably of equal or greater significance.
Peer review is of enormous importance for managing the content of the published scientific record and the careers of the scientists who produce it. It is perceived as the gold standard of scholarly publishing, and for many determines whether or not research can be viewed as scientifically valid. Accordingly, peer review is a vital component at the core of the process of research communication, with repercussions for the very structure of academia, which largely operates through a publication-based reward and incentive system.
Openness in scholarly communication takes many forms. One of the most commonly debated in academic spheres is undoubtedly open access – the free, equal, and unrestricted access to research papers. As well as open access, there are also great pushes being made in the realms of open data and open metrics. Together, these all come under an umbrella of ‘open research’.
One important aspect of open research is peer review. At ScienceOpen, we advocate maximum transparency in the peer review process, based on the concept that research should be an open dialogue and not locked away in the dark. We have two main peer review initiatives for our content: peer review by endorsement, and post-publication peer review.
A new project, the Peer Reviewers Openness Initiative (PROI), has been launched recently. Like ScienceOpen, it is grounded in the belief that openness and transparency are core values of science. The core of the initiative is to encourage reviewers of research papers to make open practices a pre-condition for a more comprehensive review process. You can read more about the Initiative in a paper (open access, obviously) published via the Royal Society.
Data should be made publicly available. All data needed for evaluation and reproduction of the published research should be made publicly available, online, hosted by a reliable third party.
Stimuli and materials should be made publicly available. Stimulus materials, experimental instructions and programs, survey questions, and other similar materials should be made publicly available, hosted by a reliable third party.
In case some data or materials are not open, clear reasons (e.g., legal or ethical constraints, or severe impracticality) should be given why. These reasons should be outlined in the manuscript.
Documents containing details for interpreting any files or code, and for compiling and running any software programs, should be made available with the above items. In addition, licensing or other restrictions on their use should be made clear.
The location of all of these files should be advertised in the manuscript, and all files should be hosted by a reliable third party. The choice of online file hosting should be made to maximize the probability that the files will be accessible for many years, and to minimize the probability that they will be lost for trivial reasons (e.g., accidental deletions, moving files).
Stephanie Dawson, CEO of ScienceOpen, and Jon Tennant, Communications Director, have signed the PROI – on behalf of ScienceOpen and independently, respectively – joining more than 200 other researchers to date. Joining only takes a few seconds of your time, and would help to solidify a real commitment to making the peer review process more transparent, and to realising the wider goal of an open research environment.
Most of us, whether we are researchers or not, can intuitively grasp what “profile fatigue” is. For those who are thus afflicted, we don’t recommend the pictured Bromo Soda, even though it’s for brain fatigue. This is largely because it contained bromide, which is chronically toxic; medications containing it were withdrawn in the USA from 1975 (wow, fairly recent!).
Naturally, in the digital age, it’s important for researchers to have profiles and be associated with their work. Funding, citations and lots of other good career advancing benefits flow from this. And, it can be beneficial to showcase a broad range of output, so blogs, slide presentations, peer-reviewed publications, conference posters etc. are all fair game. It’s also best that a researcher’s work belongs uniquely to them, so profile systems need to solve for name disambiguation (no small undertaking!).
This is all well and good until you consider the number of profiles a researcher might have created at different sites already. To help us consider this, we put together this list.
ORCID – Non-profit: independent, community driven
ResearcherID – Publisher: Thomson Reuters
Scopus Author ID – Publisher: Elsevier
Academia.edu – Researcher network
ResearchGate – Researcher network
The list shows that a researcher could have created (or have been assigned, as with Scopus) 7 “profiles”, or more accurately, 7 online records of research contributions. That’s on top of those at their research institution and other organizations, and only one iD (helpfully shown in green at the top!) is run by an independent non-profit called ORCID.
Different from a profile, ORCID is a unique, persistent personal identifier that a researcher uses as they publish, submit grants, and upload datasets, and that connects them to information on other systems. But not all other profile systems connect back (sigh). Which leads us, once again, to the concept of “interoperability”, one of the central arguments behind recent community dissatisfaction over the new STM licenses, which we have covered previously.
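As a concrete illustration of why one persistent identifier aids interoperability, here is a minimal Python sketch (the function names are our own invention) showing how any system can validate an ORCID iD locally using the ISO 7064 MOD 11-2 check digit that ORCID documents, and then build the URL of the freely readable record on ORCID’s public API:

```python
def orcid_checksum_ok(orcid_id: str) -> bool:
    """Validate an ORCID iD via the ISO 7064 MOD 11-2 check digit
    described in ORCID's own support documentation."""
    digits = orcid_id.replace("-", "")
    if len(digits) != 16:
        return False
    total = 0
    for ch in digits[:-1]:          # first 15 characters must be digits
        if not ch.isdigit():
            return False
        total = (total + int(ch)) * 2
    result = (12 - total % 11) % 11
    expected = "X" if result == 10 else str(result)
    return digits[-1].upper() == expected

def public_record_url(orcid_id: str) -> str:
    """URL of the freely readable record on ORCID's public API (v3.0)."""
    return f"https://pub.orcid.org/v3.0/{orcid_id}/record"

# ORCID's documentation uses this sample iD for illustration:
print(orcid_checksum_ok("0000-0002-1825-0097"))  # True
```

Because the check digit is baked into the iD itself, a profile system can reject mistyped identifiers before ever calling the API – one small, concrete payoff of a shared standard.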
Put simply, if we all go off and do our own thing with licensing and profiling then we create more confusion and effort for researchers. Best to let organizations like Creative Commons and ORCID take care of making sure that everyone can play nicely in the sandbox (although they do appreciate community advocacy on these issues).
Interoperability is one good reason why ScienceOpen integrated our registration with ORCID and uses their iDs to provide researcher profiles on our site. We don’t do this just because we think profiles are kinda neat – they are, but they are also time-consuming and tedious to prepare (especially 6 times!).
We did it because we are trying to improve peer review – which we believe should be done after publication by experts with at least 5 publications on their ORCID iD – and because we believe in minimizing researcher hassle. This is why our registration process is integrated with the creation of an ORCID iD, which could become pivotal for funders in the reasonably near future (so best for researchers to get on board with them now!).
So, given that it seems likely that all researchers will need an ORCID iD (and boy, it would be nice if they would get one by registering with us!), it is also important that all the sites listed in the above grid integrate with ORCID too – and that hasn’t happened yet (you know who you are!). The others have done a nice job of integrating, by all accounts.
In conclusion, publishers and other service providers need to remember that they serve the scientific community, not the other way around. And this publisher would like to suggest that everyone in the grid please integrate with ORCID, pronto!
Reviewing with ScienceOpen, the new OA research + publishing network, is a bit different from what researchers may have experienced elsewhere! To see for yourself, watch this short video on Post-Publication Peer Review.
Q. For busy researchers & physicians, time is short, so why bother to review for ScienceOpen?
A1. Firstly, because the current Peer Review system doesn’t work
David Black, the Secretary General of the International Council for Science (ICSU) said in a recent ScienceOpen interview “Peer Review as a tool of evaluation for research is flawed.” Many others agree.
Here are our observations and what we are doing to ease the strain.
Anonymous Peer Review encourages disinhibition. Since the balance of power is also skewed, this can fuel unhelpful, even destructive, reviewer comments. At ScienceOpen, we only offer non-anonymous Post-Publication Peer Review.
Authors can suggest up to 10 people to review their article. Reviews of ScienceOpen articles, and of any of the 1.3 million other OA papers aggregated on our platform, are by named academics with at least five publications on their ORCID iD, which is our way of maintaining the standard of scientific discourse. We believe that those who have experienced peer review themselves are more likely to understand the pitfalls of the process and offer constructive feedback to others.
Martin Suhm, Professor of Physical Chemistry, Georg-August-Universität Göttingen, Germany and one of our first authors said in a recent ScienceOpen interview “Post-Publication Peer Review will be an intriguing experience, certainly not without pitfalls, but worth trying”.
A2. Secondly, because reviews receive a DOI, so your contributions can be cited
We believe that scholarly publishing is not an end in itself, but the beginning of a dialogue to move research forward. In a move sure to please busy researchers tired of participating without recognition, each review receives a Digital Object Identifier (DOI) so that others can find and cite the analysis and the contribution becomes a registered part of the scientific debate.
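A review’s DOI behaves like any other DOI: it can be resolved and its metadata retrieved through standard infrastructure. A minimal sketch (the helper names and the example DOI are ours, purely illustrative) using the doi.org resolver and the Crossref REST API works endpoint:

```python
from urllib.parse import quote

def doi_resolver_url(doi: str) -> str:
    """Standard resolver link for any DOI: https://doi.org/<DOI>."""
    return "https://doi.org/" + quote(doi, safe="/")

def crossref_metadata_url(doi: str) -> str:
    """Crossref REST API endpoint that returns citation metadata as JSON."""
    return "https://api.crossref.org/works/" + quote(doi, safe="/")

# '10.1234/example.review' is a placeholder, not a real review DOI.
print(doi_resolver_url("10.1234/example.review"))
# https://doi.org/10.1234/example.review
```

This is what makes a review a first-class, citable object: any reference manager or indexing service that understands DOIs can find it.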
All reviews require a four-point assessment (using five stars) of the level of importance, validity, completeness, and comprehensibility, and there’s space to introduce and summarize the material.
Should authors wish to make minor or major changes to their work in response to review feedback, ScienceOpen offers versioning. Versions are clearly visible online; the latest is presented first, with prominent links to previous iterations. We maintain and display information about which version of an article the reviews and comments refer to, allowing readers to follow a link to an earlier version of the content and see the article history.
A3. Finally, because problems are more visible
When Peer Review is done in the open by named individuals, we believe it should be more constructive and issues will surface more quickly. The resolution of matters arising isn’t simpler or quicker because they are more obvious, but at least they can be seen and addressed.
Here’s a quick overview of ScienceOpen services:
Publishes ALL article types: Research, Reviews, Opinions, Posters etc
From ALL disciplines: science, medicine, the humanities and social science
Aggregates over 1.3 million OA articles from leading publishers
Publication within about a week from submission with DOI