At ScienceOpen, the research + publishing network, we’re enjoying some of the upsides of being the new kid on the Open Access (OA) block. Innovation and building on the experiments of others is easier when there’s less to lose, but we are also the first to admit that life as a start-up is not for the faint-hearted!
In the years since user-generated comments and reviews were first introduced, those of us who strive to improve research communication have wrestled with questions such as: the potential for career damage; whether content should target peer or public audiences; whether comments should come from experts, from everyone, or from a mix; and lower-than-anticipated participation.
We want to acknowledge the many organizations who have done a tremendous job at showing different paths forward in this challenging space. Now it’s our turn to try.
Since launch, ScienceOpen has assigned members different user privileges based on their previous publishing history as verified by their ORCID iD. This seemed like a reasonable way to measure involvement in the field and to ensure the right level of publishing experience to understand the pitfalls of the process. This neat diagram encapsulates how it works.
Scientific and Expert Members of ScienceOpen can review all the content on the site, which includes 1.3 million+ OA articles and a very small number of our own articles (did we mention we’re new?).
All reviews require a four-point assessment (using five stars each) of the article’s importance, validity, completeness, and comprehensibility, and there’s space to introduce and summarize the material. Inline annotation captures reviewer feedback during reading. Next up in the site release cycle: mechanisms to make it easy for authors to respond to in-line observations.
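The four-dimension assessment described above can be sketched as a small data structure. This is purely illustrative (the class, field names, and validation are our own hypothetical rendering, not ScienceOpen’s actual code); only the four dimensions and the 1–5 star scale come from the description.

```python
from dataclasses import dataclass

@dataclass
class ReviewAssessment:
    """Hypothetical model of a four-point, five-star review assessment."""
    importance: int
    validity: int
    completeness: int
    comprehensibility: int

    def __post_init__(self):
        # Each dimension must be rated on the five-star scale.
        for name in ("importance", "validity", "completeness", "comprehensibility"):
            stars = getattr(self, name)
            if not 1 <= stars <= 5:
                raise ValueError(f"{name} must be rated 1-5 stars, got {stars}")

    def average(self) -> float:
        """Mean star rating across the four dimensions."""
        return (self.importance + self.validity
                + self.completeness + self.comprehensibility) / 4

review = ReviewAssessment(importance=4, validity=5, completeness=3, comprehensibility=4)
print(review.average())  # 4.0
```

A combined score like this average is one obvious way to summarize a review at a glance, while the four separate ratings preserve the nuance.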
In a move sure to please busy researchers tired of participating without recognition, each review, including the subsequent dialogue, receives a Digital Object Identifier (DOI) so that others can find and cite the analysis, and the contribution becomes a registered part of the scientific debate.
Welcome to our wonderful world of Reviewing! Please share your feedback here or @Science_Open.
David Black is Secretary General of the International Council for Science (ICSU) and Professor of Organic Chemistry at the University of New South Wales, Australia. An advocate of Open Access for scientific data in his role at ICSU, Professor Black is a proponent of the initiatives of ICSU and ICSU-affiliate groups, such as the Committee on Freedom and Responsibility in the Conduct of Science (CFRS), the ICSU-World Data System (ICSU-WDS), the International Council for Scientific and Technical Information (ICSTI), and ICSU’s Strategic Coordinating Committee on Information and Data (SCCID)… Continue reading “ScienceOpen Interview with David Black, Secretary General, International Council for Science.”
ScienceOpen continues our series of interviews with our new authors with Professor Lorenzo Iorio ( https://www.scienceopen.com/profile/lorenzo_iorio ), who has just published an article on ScienceOpen entitled “Orbital effects of a monochromatic plane gravitational wave with ultra-low frequency incident on a gravitationally bound two-body system” ( http://goo.gl/kCYgwd ).
As a newcomer to the OA publishing scene, ScienceOpen thought it would be fascinating to profile the scientists who are choosing to publish with us. We’re delighted to welcome expert member Martin Suhm ( http://goo.gl/bEbm89 ) – Professor of Physical Chemistry, Georg-August-Universität Göttingen, Germany – to our Research + Publishing Network.
Over the last few days I attended the Spring Meeting of the German Physical Society (DPG) in Berlin. Physicists are sometimes considered a very special species among scientists, and not only because of the characters in the “Big Bang Theory” sitcom. Physicists developed the World Wide Web in the late eighties, which became the starting point for all internet activities today. In 1991 Paul Ginsparg started to post preprints of research articles in a repository at Los Alamos National Laboratory, known as “arXiv” ( www.arXiv.org ), which now consists of more than… Continue reading “Open Access in Physics: Do we need something outside the arXiv?”
There is no doubt that peer review is one of the most crucial features of scientific publishing. A scientific discovery that is written down and then hidden in a secret place is not part of science. Only if it is made public and judged by the scientific community does it enter the next stage and possibly become part of the scientific record. Some of those discoveries will be immediately rejected and labeled as mere bullshit; others will be accepted as proper science, though simply ignored. Still others will be regarded as useful or even celebrated as scientific breakthroughs. It even happens from time to time that long-ignored discoveries experience a revival and suddenly become the focus of attention – years or decades after their initial publication.
We all know how peer review works. We have done it for centuries and it has become part of our scientific culture. We learned from our supervisors how it is done properly, and we will teach it to our own PhD students as soon as we are the supervisors. Interestingly, we rarely reflect on WHY we do things. So, what we need to ask ourselves is:
“Why did we make single-blind pre-publication peer review the gold standard?”
First of all, because it was the best way to do peer review – at least in times when manuscripts and referee reports were sent by mail, and articles were bundled into issues and distributed as printed journals to libraries worldwide. It simply didn’t make sense to review after the paper was already on the shelves; and it was completely reasonable to send manuscripts to peer review first and print only those articles that passed this initial quality test. By the way, the second and even more important quality check is still done by the whole scientific community. Reproducibility, or rather the lack of it, is a big issue in the empirical sciences – despite the peer review system being in place.
The peer review process was managed by publishing houses. They knew the secret crafts called typesetting and printing and had taken the trouble to organize the global delivery of their product, the scientific journal. The money for all these services was paid by libraries in the form of journal subscription fees. Publishing was hence (almost) free for authors. Space was precious and costly. In such a system it was even more important to pre-select the articles “worth publishing” – with the beneficial side effect of boosting the most precious selling point, the Journal Impact Factor. So only the “best” and “most interesting” papers were selected, “not so important” sections like Materials and Methods and References were shortened, and “published” became synonymous with “peer reviewed”. For a deeper analysis of the major disadvantages of the IF, see Alexander’s discussion “Journal Impact Factors – Time to say goodbye?” in this blog. Another, less beneficial side effect of the label “published”: we all tend to perceive papers as something carved in stone.
In the early 1990s, a revolution named the World Wide Web started to become reality at CERN. It had the potential to change the world forever – and it truly fulfilled this promise. I think it is legitimate to say that human history can be divided into the time before and after the internet. Today, information can be stored and distributed in digital form: fast, easy, cheap, and with almost no limitation. This led to a paradigm shift in scientific publishing – or as Clay Shirky puts it:
“[…] the word “publishing” means a cadre of professionals who are taking on the incredible difficulty and complexity and expense of making something public. That’s not a job anymore. That’s a button. There’s a button that says “publish,” and when you press it, it’s done.”
Nevertheless, we still do peer review as we did a hundred years ago. Why not use the advantages of the internet when judging scientific literature? Why do we still let a handful of editors preselect papers, with the journal impact factor in mind when deciding? Why not make articles public first and let the scientific community judge afterwards? Why not assess the “impact” of each article separately, instead of referring to the prestige of the journal and using the average number of citations of all its articles as a measure? Why not make the identity and comments of each reviewer public? Why not let readers benefit from the reviewers’ thorough analyses and decide for themselves which information they regard as useful?
In the end the reader will have to judge the paper anyway. I think it would be best if the reader had as much information available as possible. Not as a must, but as an option. If you are content with the information “has undergone peer review”, fine. I personally would like to know: How many reviewers? Who are they? Which were the positive and which were the negative points? By no means does this information supersede my own judgment. It simply helps me to assess the quality of the review process, points me at relevant details and enables me to preselect papers by my own criteria. Nothing argues against a pool of papers of heterogeneous quality, as long as I’m able to select in a suitable way:
“Please show me only those articles with at least two positive reviews and an average rating of 80%”
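A reader-side filter like the one quoted above is straightforward to express in code. The sketch below is our own hypothetical illustration, not ScienceOpen’s actual query interface; it assumes each review’s percentage rating is its star rating divided by five, and (as a further assumption) counts a review of three stars or more as “positive”.

```python
def filter_articles(articles, min_positive=2, min_avg=0.80):
    """Keep articles with at least `min_positive` positive reviews
    and an average rating of `min_avg` (80% by default)."""
    selected = []
    for article in articles:
        # Convert each review's star rating (1-5) to a fraction of the maximum.
        ratings = [stars / 5 for stars in article["review_stars"]]
        # Assumption: a review of 3+ stars (>= 0.6) counts as "positive".
        positive = sum(1 for r in ratings if r >= 0.6)
        if ratings and positive >= min_positive and sum(ratings) / len(ratings) >= min_avg:
            selected.append(article["title"])
    return selected

articles = [
    {"title": "A", "review_stars": [5, 4]},  # two positive reviews, average 90%
    {"title": "B", "review_stars": [5, 2]},  # only one positive review
    {"title": "C", "review_stars": [4]},     # only one review in total
]
print(filter_articles(articles))  # ['A']
```

The point is not this particular formula but that, once reviews are open and structured, every reader can apply selection criteria of their own choosing rather than relying on a single editorial gate.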
And even better, reviews can now be attributed to a person. That means you can start building up a reputation as a good reviewer – in addition to being a good researcher. Furthermore, I personally would think twice before signing a review and would make sure that I had done a proper job. This does NOT mean that anonymous reviews are of lower quality. Far from it! Hundreds of thousands of conscientious reviewers are working behind closed doors to keep the system running! I simply think it’s time to reward them for this important and honorable duty.
No system is perfect, and each has advantages and disadvantages. The system of Public Post-Publication Peer Review we offer on our platform, ScienceOpen, is a step in the right direction – at least in my eyes. I cordially invite everyone to help us improve it further and shape it into a system that benefits everyone.
2014: The year is off to a good start for the Open Access movement. In the US, Congress passed legislation requiring that all research funded by public funding bodies be freely accessible, at least in the author’s final version and with a 12-month embargo after publication. (Peter Suber has a good summary of the legislation in his blog: http://goo.gl/Pmlkg1 ) Will this continue a trend started by the National Institutes of Health and its public access database PubMed Central (PMC – http://www.ncbi.nlm.nih.gov/pmc ) to increasingly direct readers to the pre-typeset version of an article? Phil Davis of the… Continue reading “2014 – A good year for Open Access publishing?”