Tag: Post-Publication

In:  About SO  

ScienceOpen launches new search capabilities

At ScienceOpen, we’ve just upgraded our search and discovery platform to be faster, smarter, and more efficient. A new user interface and filtering capabilities provide a better discovery experience for users. ScienceOpen searches more than 27 million records – full-text open access articles or article metadata – and puts them in context. We include peer-reviewed academic articles from all fields, as well as preprints drawn from the arXiv, which are explicitly tagged as such.

The current scale of academic publishing around the world is enormous. According to a recent STM report, around 2.5 million new peer-reviewed articles are published every single year, and that’s just in English-language journals.

The problem this creates, for researchers and for everyone else, is how to stay up to date with newly published research, and not just in our own fields but in related fields too. Researchers are permanently inundated, and we need a way to sift the wheat from the chaff.

The solution is smart, enhanced search and discovery. Platforms like ResearchGate and Google Scholar (GS) have just a single layer of discovery, with additional functions such as sorting by date to help narrow things down a bit. GS is the de facto mode of discovery of primary research for most academics, but it also contains a whole slew of ‘grey literature’ (i.e., non-peer-reviewed outputs), which often gets in the way of finding the best research.

On top of this, if you do a simple search in GS, say just for dinosaurs, you get 161,000 results. How on Earth are you supposed to find the most useful and most relevant research from that, if you want to move beyond Google’s PageRank, especially if you’re coming to the topic from outside the area of specialisation? Simply narrowing down by date does very little to prevent being overwhelmed by a deluge of maybe-relevant, maybe-not literature. We need to do better at research discovery.
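
To make the idea of layered filtering concrete, here is a tiny, purely illustrative sketch. The record fields and filter values below are invented for the example; they are not ScienceOpen’s actual data model or API.

# A purely illustrative sketch of layered (faceted) filtering; the record fields
# and filter values are invented, not ScienceOpen's actual data model or API.
records = [
    {"title": "Dinosaur bone histology", "year": 2016,
     "discipline": "palaeontology", "peer_reviewed": True, "citations": 42},
    {"title": "Dinosaurs in popular film", "year": 2009,
     "discipline": "media studies", "peer_reviewed": False, "citations": 3},
    # ... imagine ~161,000 of these coming back from a simple keyword search
]

def refine(hits, discipline=None, since=None, peer_reviewed_only=False):
    """Apply successive filters rather than relying on a single ranked list."""
    if discipline:
        hits = [r for r in hits if r["discipline"] == discipline]
    if since:
        hits = [r for r in hits if r["year"] >= since]
    if peer_reviewed_only:
        hits = [r for r in hits if r["peer_reviewed"]]
    # sort the survivors by citations as one possible relevance proxy
    return sorted(hits, key=lambda r: r["citations"], reverse=True)

top_hits = refine(records, discipline="palaeontology", since=2014, peer_reviewed_only=True)
print([r["title"] for r in top_hits])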

Continue reading “ScienceOpen launches new search capabilities”  

In:  Peer Review  

Disambiguating post-publication peer review

Guest post by Tony Ross-Hellauer, Scientific Manager of OpenAIRE (email ross-hellauer@sub.uni-goettingen.de). Originally posted on the OpenAIRE blog. Re-posted with permission under a CC BY license.


NOTE: OpenAIRE would like to know what you think about open peer review! Have your say here until 7th October! 

Tl;dr – “Post-publication peer review” (PPPR) has gained a lot of traction in recent years. As with much of peer review’s confusing lexicon, however, this term is ambiguous. This ambiguity stems from confusion over what constitutes “publication” in the digital age. PPPR conflates two distinct phenomena, which we would do better to treat separately, namely “open pre-review manuscripts” and “open final-version commenting”.

What is “post-publication peer review”?

Peer review can have two senses, one specific and the other more general. “Peer Review” (henceforth PR) is a well-defined publishing practice for the quality assurance of research articles and other academic outputs. It is intimately tied to the publication process. It traditionally begins when an editor sends a manuscript to reviewers and ends when the editor accepts a manuscript for publication. But “peer review” (lower-case, henceforth “pr”) is just the critique and appraisal of ideas, theories, and findings by those with particular insight into a topic. Such feedback happens all the time. It happens before manuscripts are submitted: in colleagues’ initial reactions (positive or negative) to a new idea, feedback gained from conferences, lectures, seminars and late-night bull sessions, or private comments on late-stage first-draft manuscripts from trusted peers. And it continues after the article’s appearance in a journal, via a multitude of channels through which readers can give feedback, including comment sections on journal websites, dedicated channels for post-publication commentary, blogs and social media, and of course in future research that cites and comments back on the findings.

Continue reading “Disambiguating post-publication peer review”  

In:  Other  

Review Instructions for ScienceOpen

At ScienceOpen, you can peer review any of 60 million research articles (and climbing every day!). That’s right! Any one you want. Even if an article has been published and ‘passed’ peer review, you can still comment on it. The only reason there would ever be no value in doing this would be if all published work were completely infallible, which is clearly not the case.

To review an article, you must create a LOGIN ID for ScienceOpen and ORCID by following the instructions here.

The ScienceOpen community has agreed to only allow formal peer reviews from ScienceOpen members who have published at least 5 articles in peer-reviewed journals. For this reason, please do not forget to add your publication history to ORCID. Only if you do not have an ORCID account can a peer review manager at ScienceOpen set up an account for you. If you would like someone to do this, please send an email to dan.cook@scienceopen.com before proceeding. Put the phrase “Create accounts” in the subject line, and put your name and email address in the body of the email.
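
For the technically curious, the five-publication rule can be sanity-checked against a public ORCID record. The sketch below assumes ORCID’s public v3.0 REST API and its JSON layout (one “group” entry per distinct work); it is only an illustration, and ScienceOpen’s own eligibility check may well work differently.

# A rough sketch of checking the five-publication rule against a public ORCID record.
# Assumes ORCID's public v3.0 REST API and its JSON layout (one "group" entry per work);
# ScienceOpen's own eligibility check may well work differently.
import requests

def has_enough_publications(orcid_id, minimum=5):
    url = f"https://pub.orcid.org/v3.0/{orcid_id}/works"
    response = requests.get(url, headers={"Accept": "application/json"}, timeout=10)
    response.raise_for_status()
    works = response.json().get("group", [])  # each group is one distinct work
    return len(works) >= minimum

# Example with ORCID's well-known test iD:
print(has_enough_publications("0000-0002-1825-0097"))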

Continue reading “Review Instructions for ScienceOpen”  

In:  Peer Review  

Pre- or post-publication peer review

Traditional models of peer review take place pre-publication, are carried out by selected referees, and are mediated by an Editor or Editorial Board. This model has been adopted by the vast majority of journals and acts as the filter that decides what is considered worthy of publication. In this traditional pre-publication model, the majority of reviews are discarded as soon as the research articles are published, and all of the insight, context, and evaluation they contain is lost from the scientific record.

Several publishers and journals are now exploring more adventurous models of peer review that take place after publication. The principle here is that all research deserves the opportunity to be published, and that filtering through peer review happens after the research articles have actually been communicated. Numerous venues now provide inbuilt systems for post-publication peer review, including ScienceOpen, RIO, The Winnower, and F1000 Research. In addition to those adopted by journals, there are other post-publication annotation and commenting services, such as hypothes.is and PubPeer, that are independent of any specific journal or publisher and operate across platforms.

A potential future system of peer review (source)

Continue reading “Pre- or post-publication peer review”  

In:  Profiles  

Researcher #profilefatigue – what it is and why it’s exhausting!

Image credit: Arallyn, Flickr, CC BY

Most of us, whether we are researchers or not, can intuitively grasp what “profile fatigue” is. For those who are thus afflicted, we don’t recommend the pictured Bromo Soda, even though it’s for brain fatigue. This is largely because it contained bromide, which is chronically toxic; medications containing it were withdrawn in the USA from 1975 (wow, fairly recent!).

Naturally, in the digital age, it’s important for researchers to have profiles and be associated with their work. Funding, citations, and lots of other good career-advancing benefits flow from this. And it can be beneficial to showcase a broad range of output, so blogs, slide presentations, peer-reviewed publications, conference posters, etc. are all fair game. It’s also best that a researcher’s work belongs uniquely to them, so profile systems need to solve name disambiguation (no small undertaking!).

This is all well and good until you consider the number of profiles a researcher might have created at different sites already. To help us consider this, we put together this list.

Organization – Status
  • ORCID – Non-profit: independent, community driven
  • Google Scholar – Search: Google
  • Researcher ID – Publisher: Thomson Reuters
  • Scopus Author ID – Publisher: Elsevier
  • Mendeley – Publisher: Elsevier
  • Academia.edu – Researcher Network: Academia.edu
  • ResearchGate – Researcher Network: ResearchGate

The list shows that a researcher could have created (or, in the case of Scopus, been assigned) seven “profiles”, or more accurately, seven online records of research contributions. That’s on top of those at their research institution and other organizations, and only one iD (helpfully shown at the top of the list!) is run by an independent non-profit, ORCID.

Different from a profile, an ORCID iD is a unique, persistent personal identifier that a researcher uses as they publish, submit grants, and upload datasets, and that connects them to information on other systems. But not to all other profile systems (sigh). Which leads us, once again, to the concept of “interoperability”, one of the central arguments behind recent community dissatisfaction over the new STM licenses, which we have covered previously.

Put simply, if we all go off and do our own thing with licensing and profiling then we create more confusion and effort for researchers. Best to let organizations like Creative Commons and ORCID take care of making sure that everyone can play nicely in the sandbox (although they do appreciate community advocacy on these issues).

Interoperability is one good reason why ScienceOpen integrated our registration with ORCID and uses their iDs to provide researcher profiles on our site. We don’t do this just because we think profiles are kinda neat (they are, but they are also time-consuming and tedious to prepare, especially 6 times over!).

We did it because we are trying to improve peer review, which we believe should be done after publication by experts with at least 5 publications on their ORCID iD, and because we believe in minimizing researcher hassle. That is why our registration process is integrated with the creation of an ORCID iD, which could become pivotal for funders in the reasonably near future (so best for researchers to get on board with them now!).

So, given that it seems likely that all researchers will need an ORCID iD (and boy, it would be nice if they got one by registering with us!), it is also important that all the sites listed in the grid above integrate with ORCID too. That hasn’t happened everywhere yet (you know who you are!), although the others have done a nice job of integrating, by all accounts.

In conclusion, publishers and other service providers need to remember that they serve the scientific community, not the other way around. This publisher would like to suggest that everyone in the grid please integrate with ORCID pronto!

In:  About SO  

ScienceOpen – making publishing easier. Why review?

Image credit: AJ Cann/Flickr, CC BY-SA

Reviewing with ScienceOpen, the new OA research + publishing network, is a bit different from what researchers may have experienced elsewhere! To see for yourself, watch this short video on Post-Publication Peer Review.

Q. For busy researchers & physicians, time is short, so why bother to review for ScienceOpen?

A1. Firstly, because the current Peer Review system doesn’t work 

David Black, the Secretary General of the International Council for Science (ICSU), said in a recent ScienceOpen interview: “Peer Review as a tool of evaluation for research is flawed.” Many others agree.

Here are our observations and what we are doing to ease the strain.

Anonymous Peer Review encourages disinhibition. Since the balance of power is also skewed, this can fuel unhelpful, even destructive, reviewer comments. At ScienceOpen, we only offer non-anonymous Post-Publication Peer Review.

Authors can suggest up to 10 people to review their article. Reviews of ScienceOpen articles, and of any of the 1.3 million other OA papers aggregated on our platform, are by named academics with a minimum of five publications on their ORCID iD, which is our way of maintaining the standard of scientific discourse. We believe that those who have experienced Peer Review themselves are more likely to understand the pitfalls of the process and to offer constructive feedback to others.

Martin Suhm, Professor of Physical Chemistry at Georg-August-Universität Göttingen, Germany, and one of our first authors, said in a recent ScienceOpen interview: “Post-Publication Peer Review will be an intriguing experience, certainly not without pitfalls, but worth trying”.

A2. Second, reviews receive a DOI so your contributions can be cited

We believe that scholarly publishing is not an end in itself, but the beginning of a dialogue to move research forward. In a move sure to please busy researchers tired of participating without recognition, each review receives a Digital Object Identifier (DOI) so that others can find and cite the analysis and the contribution becomes a registered part of the scientific debate.

All reviews require an assessment of four criteria – importance, validity, completeness, and comprehensibility – each rated on a five-star scale, and there is space to introduce and summarize the material.

Should authors wish to make minor or major changes to their work in response to review feedback, ScienceOpen offers versioning. Versions are clearly visible online; the latest is presented first, with prominent links to previous iterations. We maintain and display information about which version of an article each review and comment refers to, so readers can follow a link to an earlier version of the content and see the article’s history.
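
To make the shape of such a review concrete, here is a small sketch of how a signed, versioned review record could be represented. The field names and classes are our own illustration, not ScienceOpen’s internal data model.

# An illustrative sketch of a signed, versioned review record; the field names and
# classes are invented for clarity, not ScienceOpen's internal data model.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Review:
    reviewer: str            # reviews are signed, never anonymous
    doi: str                 # each review receives its own citable DOI
    article_version: int     # the version of the article the review refers to
    importance: int          # each of the four criteria is rated on a 1-5 star scale
    validity: int
    completeness: int
    comprehensibility: int
    summary: str = ""

    def average_stars(self):
        return mean([self.importance, self.validity,
                     self.completeness, self.comprehensibility])

@dataclass
class Article:
    title: str
    versions: list = field(default_factory=list)   # newest version listed first
    reviews: list = field(default_factory=list)

    def reviews_for_version(self, version):
        # readers can see which version each review and comment refers to
        return [r for r in self.reviews if r.article_version == version]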

A3. Finally, because problems are more visible

When Peer Review is done in the open by named individuals, we believe it should be more constructive and issues will surface more quickly. The resolution of matters arising isn’t simpler or quicker because they are more obvious, but at least they can be seen and addressed.

Here’s a quick overview of ScienceOpen services:

  • Publishes ALL article types: Research, Reviews, Opinions, Posters, etc.
  • From ALL disciplines: science, medicine, the humanities and social sciences
  • Aggregates over 1.3 million OA articles from leading publishers
  • Publication within about a week of submission, with a DOI
  • Transparent Post-Publication Peer Review with DOI
  • Proofs, easy corrections and versioning
  • Article Metrics to track usage and impact
  • Compliant with all Funder OA mandates (CC BY)

Welcome to the next wave of Open Access Publishing. Join us today.

 

ScienceOpen Author Interview Series – Martin Suhm

As a newcomer to the OA publishing scene, ScienceOpen thought it would be fascinating to profile the scientists who are choosing to publish with us. We’re delighted to welcome expert member Martin Suhm ( http://goo.gl/bEbm89 ) – Professor of Physical Chemistry, Georg-August-Universität Göttingen, Germany – to our Research + Publishing Network.

Martin is an established figure who contributes to the German scientific community through his membership of the Leopoldina, the German National Academy of Sciences, and the Committee for the Allocation of Alexander von Humboldt Foundation Research. He is also…

Continue reading “ScienceOpen Author Interview Series – Martin Suhm”  

In:  Peer Review  

Peer Review 2.0

There is no doubt that peer review is one of the most crucial features of scientific publishing. A scientific discovery that is written down and then hidden in a secret place is not part of science. Only if it is made public and judged by the scientific community does it enter the next stage and possibly become part of the scientific record. Some of those discoveries will be immediately rejected and labeled as mere bullshit; others will be accepted as proper science, though simply ignored. Still others will be regarded as useful or even celebrated as scientific breakthroughs. It even happens from time to time that long-ignored discoveries experience a revival and suddenly become the focus of attention – years or decades after their initial publication.

We all know how peer review works. We have done it for centuries and it has become part of our scientific culture. We’ve learned from our supervisors how it’s done properly, and we’ll teach it to our own PhD students as soon as we are the supervisors ourselves. Interestingly, we rarely reflect on WHY we do things. So, what we need to ask ourselves is:

“Why did we make single-blind pre-publication peer review the gold standard?”

First of all, because it was the best way to do peer review – at least in times when manuscripts and referee reports were sent by mail, and articles were bundled into issues and distributed as printed journals to libraries worldwide. It simply didn’t make sense to review after the paper was already on the shelves, and it was completely reasonable to send manuscripts to peer review first and print only those articles that had passed this initial quality test. By the way, the second and even more important quality check is still done by the whole scientific community: reproducibility, or rather the lack of it, is a big issue in the empirical sciences – despite the peer review system being in place.

The peer review process was managed by publishing houses. They knew the secret crafts called typesetting and printing and had taken the trouble to organize the global delivery of their product, the scientific journal. The money for all these services was paid by libraries in the form of journal subscription fees; publishing was hence (almost) free for authors. Space was precious and costly. In such a system it was all the more important to pre-select the articles that were “worth publishing” – with the beneficial side effect that this positively affected the most precious selling point, the Journal Impact Factor. So only the “best” and “most interesting” papers were selected, “not so important” sections like Materials and Methods and References were shortened, and “published” became synonymous with “peer reviewed”. For a deeper analysis of the major disadvantages of the IF, see Alexander’s discussion “Journal Impact Factors – Time to say goodbye?” on this blog. Another less beneficial side effect of the label “published”: we all tend to perceive papers as something that is carved in stone.
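
As a reminder of how that selling point is calculated: the Impact Factor for a given year is simply the citations received that year by items the journal published in the previous two years, divided by the number of citable items from those two years. The numbers below are made up purely to show the arithmetic.

# The standard Journal Impact Factor arithmetic, with made-up numbers for illustration.
# JIF(2014) = citations in 2014 to items published in 2012-2013
#             / citable items published in 2012-2013
citations_2014_to_2012_2013 = 600   # hypothetical count
citable_items_2012_2013 = 200       # hypothetical count

jif_2014 = citations_2014_to_2012_2013 / citable_items_2012_2013
print(f"JIF 2014 = {jif_2014:.1f}")  # 3.0 - a journal-level average that says nothing about any single article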

In the early 1990s, a revolution named the World Wide Web started to become reality at CERN. It had the potential to change the world forever – and it truly fulfilled this promise. I think it is legitimate to say that human history can be divided into the time before and after the internet. Today, information can be stored and distributed in digital form: fast, easy, cheap, and with almost no limitations. This led to a paradigm shift in scientific publishing – or, as Clay Shirky puts it:

“[…] the word “publishing” means a cadre of professionals who are taking on the incredible difficulty and complexity and expense of making something public. That’s not a job anymore. That’s a button. There’s a button that says “publish,” and when you press it, it’s done.”

Nevertheless, we still do peer review as we did a hundred years ago. Why not use the advantages of the internet when judging scientific literature? Why do we still let a handful of editors preselect papers, with the Journal Impact Factor in mind when they decide? Why not make work public first and let the scientific community judge afterwards? Why not assess the “impact” of each article separately, instead of referring to the prestige of the journal and using the average number of citations across all of its articles as a measure? Why not make the identity and comments of each reviewer public? Why not let readers benefit from the reviewers’ thorough analyses and decide for themselves which information they regard as useful?

In the end, the reader will have to judge the paper anyway. I think it is best if the reader has as much information available as possible – not as a must, but as an option. If you are content with the information “has undergone peer review”, fine. I personally would like to know: How many reviewers? Who are they? Which were the positive and which were the negative points? By no means does this information supersede my own judgment. It simply helps me to assess the quality of the review process, points me to relevant details, and enables me to preselect papers by my own criteria. Nothing argues against a pool of papers of heterogeneous quality, as long as I am able to select from it in a suitable way:

“Please show me only those articles with at least two positive reviews and an average rating of 80%”
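
A reader-defined filter like that is straightforward to express. The sketch below is only an illustration of the idea (the data layout is invented), not a description of an existing ScienceOpen feature.

# A sketch of the reader-defined filter quoted above; the data layout is invented
# purely to illustrate selecting papers by review signals.
articles = [
    {"title": "Paper A", "reviews": [{"positive": True, "rating": 0.90},
                                     {"positive": True, "rating": 0.85}]},
    {"title": "Paper B", "reviews": [{"positive": False, "rating": 0.40}]},
]

def matches(article, min_positive=2, min_average=0.80):
    reviews = article["reviews"]
    if not reviews:
        return False
    positives = sum(1 for r in reviews if r["positive"])
    average = sum(r["rating"] for r in reviews) / len(reviews)
    return positives >= min_positive and average >= min_average

print([a["title"] for a in articles if matches(a)])  # ['Paper A']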

And even better, reviews can now be attributed to a person. That means you can start building up a reputation as a good reviewer – in addition to being a good researcher. Furthermore, I personally would think twice before signing a review and would make sure that I had done a proper job. This does NOT mean that anonymous reviews are of lower quality. Far from it! Hundreds of thousands of conscientious reviewers are working behind closed doors to keep the system running! I simply think it’s time to reward them for this important and honorable duty.

No system is perfect and each has advantages and disadvantages. The system of Public Post-Publication Peer Review we offer on our platform ScienceOpen is a step in the right direction – at least in my eyes. I cordially invite everyone to help us improve it further and to shape it into a system that benefits everyone.