In:  Licenses  

If it ain’t broke… #NoNewLicenses!

Image credit: Frustration, by Eric (e-magic), CC BY-ND. https://www.flickr.com/photos/emagic/

On August 7th, the International Association of Scientific, Technical and Medical Publishers (STM) responded to a call from the Global Coalition of Access to Research, Science and Education Organizations (signed by more than 80 entities and counting, including ScienceOpen) to withdraw their new model licenses.

So what exactly is all the fuss about? Our headline pretty much sums it up and comes courtesy of OA advocate Graham Steel, who rightly observed that “if it ain’t broke, don’t fix it”.

The Open Access (OA) community happily relies on a license suite from Creative Commons (CC) to provide an interoperable and simple standard for our industry. ScienceOpen uses the most flexible CC “attribution license”, known to its many friends as CC-BY. We like it because it allows maximum scope for the creative re-use of research, including commercially, with no permission from us required. The only caveat is “credit where credit is due”, which only seems fair and means that the original authors and source must be cited, together with the license type and ideally a link to the work. We believe that research works better and faster without any limitations and that CC-BY facilitates this.
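
As a concrete example of what such an attribution looks like in practice (adapting the credit line from the image above):

    “Frustration” by Eric (e-magic), via Flickr (https://www.flickr.com/photos/emagic/), licensed under CC BY-ND.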

So why does the STM Association, the “voice of academic and professional publishing” (but not ours, we’re not members), think that we need new licenses? One reason they give in their response is that “Creative Commons (CC) licenses are designed to be used across the entire creative sector, and are not specifically designed for academic and scholarly publishing”. Sadly, this demonstrates a lack of understanding of the true power of the licenses, which comes precisely from the fact that they were developed for use across different creative industries.

This is very important for those who work in scientific communication, whose job it is to explain science to a broader audience. Research is frequently complex, and mashing it up with CC-BY images (of which there are over 58 million at the photo-sharing site Flickr), Wikipedia links (over 4 million CC articles) and even music can really bring a story to life. Making this content freely available under a CC license is important because advances in science and medicine should ideally be available to and understood by everyone.

For those who have a role in this field, the reality is that navigating and selecting the best content from the overwhelming volume on the internet and then complying with the current dizzying array of more and less restrictive copyright licenses is already quite tricky enough, thank you kindly! We simply don’t need any more complexity.

In:  Announcements  

If you get it wrong you’ll get it right next time – versioning now live

Image credit: Editing a paper, Nic McPhee, Flickr, CC BY-SA

While I am sitting on the sofa composing this blog post, WordPress is seamlessly taking care of my corrections, proofs and versions at the touch of a button. I take this service completely for granted and am grateful for it, since I usually run through quite a few drafts before I am satisfied.

The time and accuracy necessary to compose a thoughtful research article, which should be replicable and on which others may choose to build, is far greater than the effort I am expending here.

Correcting the scientific literature is therefore rightly more complex and the changes more meaningful, but surely the services offered to scientists should at least match those that are available through this free blogging software?

Sadly, this is not always the case in scientific publishing, where some authors are expected to publish without a proof, and making corrections to the PDF is not an option.

Although we all strive for perfection, we know that mistakes occur and that changes need to be made, before and sometimes after publication. When this happens, having a versioning process that readers can follow is reassuring.

Authors who publish with ScienceOpen can:

  • Submit any article type: research, reviews, opinions, clinical case reports, protocols, posters etc.
  • Submit from any discipline: all sciences, medicine, humanities and social sciences
  • Submit manuscripts posted at preprint servers such as bioRxiv and arXiv
  • Use a private pre-publication workspace to develop their manuscript with co-authors
  • Get a yes/no decision on their submission within about a week after an internal check
  • Have their original manuscript published as a Preview with DOI
  • Receive proofs
  • Get copy-editing and language help when necessary as a courtesy benefit (please note, this is not a translation service!)
  • Make as many proof corrections as they wish
  • Sign off on final proofs
  • Have their Preview article replaced with a Final article in PDF, HTML and XML formats 
  • Experience fully transparent post-publication peer-review
  • Respond to reviewer feedback by publishing a revised version, with either minor or major changes (Version 1 is the original publication; Versions 2 and 3 can be minor or major in any combination and are included in the Publication Fee)

Each version has its own DOI that is semantically linked to the DOI of the original version for easy tracking. Versions are clearly visible online: the latest is presented first, with prominent links to previous versions. We maintain and display information about which version of an article each review and comment refers to, so that readers can follow a link to an earlier version of the content and see the article’s history.
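
For the technically curious, here is a minimal sketch of how such DOI-linked versions could be modeled. The field names, DOIs and structure below are our illustration only, not ScienceOpen’s actual metadata schema:

    # Toy model of DOI-linked article versions (illustrative only;
    # these DOIs are placeholders, not real identifiers).
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class ArticleVersion:
        doi: str                     # each version receives its own DOI
        version: int                 # 1 = the original publication
        previous_doi: Optional[str]  # semantic link back to the prior version
        reviews: List[str] = field(default_factory=list)  # DOIs of reviews of this version

    v1 = ArticleVersion(doi="10.9999/example.v1", version=1, previous_doi=None)
    v2 = ArticleVersion(doi="10.9999/example.v2", version=2, previous_doi=v1.doi)

    index = {v.doi: v for v in (v1, v2)}

    def history(version, index):
        """Walk the chain from the latest version back to the original."""
        while version is not None:
            yield version.doi
            version = index.get(version.previous_doi)

    print(list(history(v2, index)))  # ['10.9999/example.v2', '10.9999/example.v1']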

ScienceOpen strives to offer services to researchers that are the best they can be, for a price ($800) that is significantly less than most OA journals charge. We welcome you to register today (it takes about a minute) and consider publishing your next OA article with us.

In:  Licenses  

ScienceOpen joins the Coalition against STM Licenses

Image credit: keepcalm-o-matic

The International Association of Scientific, Technical & Medical Publishers (STM) has revealed some new “model” licenses which are ironically entitled “making OA licensing work”. We’re sorry, did we miss something?

In:  Impact Factor  

Article vs Journal Impact – Perspective from PLOS ONE Editorial Director Damian Pattinson

The Hellas Impact Basin on Mars (edited topographical map), which may be the largest crater in the solar system. Credit: Stuart Rankin, Flickr, CC-BY-NC

Earlier this summer, I skyped with Damian Pattinson, the Editorial Director of PLOS ONE, about the Impact Factor, its widespread misuse and how, thankfully, altmetrics now offer a better way forward.

Q. The PLOS ONE Impact Factor has decreased for a few years in a row. Is this to be expected given its ranking as the world’s largest journal and its remit to publish all good science regardless of impact?

A. I don’t think the Impact Factor is a very good measure of anything, but clearly it is particularly meaningless for a journal that deliberately eschews evaluation of impact in its publications decisions. Our founding principle was that impact should be evaluated post-publication. In terms of the average number of citations per article, my sense is that this is changing due to the expanding breadth of fields covered by PLOS ONE, not to mention its sheer size (we recently published our 100,000th article). When you grow as quickly as we have, your annual average citation rate will always be suppressed by the fact that you are publishing far more papers at the end of the year than at the beginning.
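
A quick back-of-the-envelope simulation (with entirely invented numbers) illustrates the growth effect described above: the newest papers have had the least time to collect citations, and a growing journal has disproportionately many of them.

    # Invented numbers, for illustration only: compare the average citation
    # rate of a journal that grows through the year with one that stays flat.
    RATE = 0.2  # assumed citations per paper per month of visibility

    def average_citations(monthly_output):
        papers = sum(monthly_output)
        # a paper published in month m has (12 - m) months to be cited this year
        citations = sum(n * (12 - m) * RATE for m, n in enumerate(monthly_output))
        return citations / papers

    growing = [100 + 40 * m for m in range(12)]  # output ramps up through the year
    flat = [100] * 12

    print(f"growing journal: {average_citations(growing):.2f}")  # ~1.00
    print(f"flat journal:    {average_citations(flat):.2f}")     # ~1.30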

Q. Articles at PLOS ONE undoubtedly vary in terms of the number of citations they accrue. Some are higher, some lower. Is there an observable pattern to this trend overall that is not reflected by a simple read of the Impact Factor?

A. Differences in the average number of citations are, to a large extent, subject specific and therefore a reflection on the size of a particular research community. Smaller fields simply produce fewer scientific papers so statistically it is less likely that even a highly-cited paper will have as many citations as one published in a larger research field. Such a subject-specific examination may also reveal different patterns if one looks at metrics besides citation. That is something we are very interested in exploring with Article-Level Metrics (ALM).

Q. Has the reduction of PLOS ONE’s Impact Factor influenced its submission volume or is that holding up relatively well?

A. Actually, the effective submission volume is still increasing even though the rate of growth has slowed. Year-on-year doubling in perpetuity is not realistic in any arena. We have seen a drop in the number of publications, however, due to a number of factors. Most notably, we have seen an increase in the rejection rate as we continue to ensure that the research published in PLOS ONE is of the highest standard. We put all our papers through rigorous checks at submission, including ethical oversight, data availability and adherence to reporting guidelines, so more papers are rejected before being sent for review. We have also found an increase in submissions better suited to other dissemination channels, and have worked with authors to pursue them. But to your point, I do not think that last year’s changing IF directly affected PLOS ONE submission volume.

Q. Stepping back for a moment, it really is extraordinary that this arguably flawed mathematical equation, first mentioned by Dr Eugene Garfield in 1955, is still so influential. Garfield said “The impact factor is a very useful tool for evaluation of journals, but it must be used discreetly”.

It seems that the use of the IF is far from discreet since it is a prime marketing tool for many organizations, although not at PLOS, which doesn’t list the IF on any of its websites (kudos). But seriously, do you agree with Garfield’s statement that the IF has any merit in journal evaluation, or that evaluating journals at all in the digital age has any merit?

A. Any journal-level metric is going to be problematic as “journals” continue to evolve in a digital environment. But the IF is particularly questionable as a tool to measure the “average” citation rates of a journal because the distribution is hardly ever normal – in most journals a few highly cited papers contribute most of the IF while a great number of papers are hardly cited at all. The San Francisco Declaration on Research Assessment (DORA) is a great first step in moving away from using journal metrics to measure things they were never intended to measure, and I recommend that everyone sign it.
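
A toy simulation (ours, with arbitrary distribution parameters) makes the skew point concrete: when citation counts follow a heavy-tailed distribution, the mean that feeds the IF sits far above what the typical paper experiences.

    # Arbitrary parameters, for illustration: heavy-tailed citation counts
    # push the mean (which drives the IF) far above the median paper.
    import random

    random.seed(1)
    citations = sorted(int(random.lognormvariate(0.5, 1.2)) for _ in range(10_000))

    mean = sum(citations) / len(citations)
    median = citations[len(citations) // 2]
    top_share = sum(citations[-100:]) / sum(citations)  # share held by the top 1%

    print(f"mean={mean:.1f}, median={median}, top 1% hold {top_share:.0%} of citations")
    # The mean lands well above the median: a handful of papers do the heavy lifting.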

Q. What are the main ways that the IF is misused, in your opinion?

A. The level to which the IF has become entrenched in the scientific community is amazing. Grants, tenure and hiring at nearly every level depend on the IF of the journals in which a researcher publishes his or her results. Nearly everyone realizes that it is not a good way to measure quality or productivity, but uses it anyway. Actually it’s more complicated than that – everyone uses it because they think that everyone else cares about it! So academics believe that their institutions use it to decide tenure, even when the institutions have committed not to; institutions think that the funders care about it despite commitments to the contrary. In some way the community itself needs to reflect on this and make some changes. The IF creates perverse incentives for the entire research community, including publishers. Of course journals try to improve their score, often in ways that are damaging to the research community. Because of how the IF is calculated, it makes sense to publish high-impact papers in January so that they collect citations for the full 12 months. Some journals hold back the best papers for months to increase the IF – which is bad for researchers and for science as a whole. Journals also choose to publish papers that may be less useful to researchers simply because they are more highly cited. So they will choose to publish (often unnecessary) review articles, while refusing to publish negative results or case reports, which will be cited less often (despite offering more useful information).
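
For readers who want the mechanics behind that January arithmetic, the standard two-year Impact Factor for a year $Y$ is

$$\mathrm{IF}_Y = \frac{C_Y(Y-1) + C_Y(Y-2)}{N_{Y-1} + N_{Y-2}},$$

where $C_Y(y)$ is the number of citations received during year $Y$ by items published in year $y$, and $N_y$ is the number of citable items published in year $y$. A paper published in January of year $Y-1$ has been visible for almost two full years by the time the year-$Y$ citations in the numerator are tallied, while a December paper from the same volume has had barely thirteen months – yet both count identically in the denominator, hence the timing games described above.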

Q. Could you imagine another metric which would better measure the output of journals like PLOS ONE?

A. Of course you are right: for journals that cover a broad range of disciplines or for interdisciplinary journals, the Impact Factor is even less useful because of the subject-specific statistics we spoke of earlier. There have been a number of newcomers such as ScienceOpen, PeerJ and F1000Research with a very broad scope – as these and other new platforms come into the publishing mainstream, we may find new metrics to distinguish strengths and weaknesses. Certainly the Impact Factor is not the best mechanism for journal quality and, even less so, researcher quality.

Q. How do you feel about ScienceOpen Advisory Board Member Peter Suber’s statement in a recent ScienceOpen interview that the IF is “an impact metric used as a quality metric, but it doesn’t measure impact well and doesn’t measure quality at all”?

A. How often a paper is cited in the scholarly literature is an important metric. But citations are a blunt tool at best to measure research quality. We do not know anything about the reason a paper was cited – it could in fact be to refute a point or as an example of incorrect methodology. If we only focus on citations, we are missing a more interesting and powerful story. With ALMs that also measure downloads, social media usage, recommendations, and more, we find huge variations in usage. In fields beyond basic research, such as clinical medicine or applied technology, which have implications for the broader population, a paper may have a big political impact even though it is not highly cited. ALMs are really starting to show us the different ways different articles are received. At the moment we do not have a good measure of quality, but we believe reproducibility of robust results is key.

At PLOS we have been at the forefront of this issue for many years, and are continuing to innovate to find better ways of measuring and improving the reproducibility of the literature. With our current focus on “impact”, we are disproportionately rewarding the “biggest story”, which may have an inverse relationship to reproducibility and quality.

Q. PLOS has a leadership role within the Altmetrics community. To again quote ScienceOpen Advisory Board Member Peter Suber on the current state of play: “Smart people are competing to give us the most nuanced or sensitive picture of research impact they can. We all benefit from that.”

Did PLOS predict the level to which the field has taken off and the amount of competition within it, or is the organization pleasantly surprised?

A. The need was clearly there and only increasing over time. When we began our Article-Level Metrics (ALM) work in 2009, we envisioned a better system for all scholars. This is certainly not something specific to Open Access.

Since then, the particular details of how we might better serve science continue to evolve, especially now that the entire community has begun to actively engage with these issues together. It’s great that there is increasing awareness that the expanding suite of article activity metrics cannot fully come of age until the data are made freely available for all scholarly literature and the metrics are widely adopted. Only then can we better understand what the numbers truly mean in order to appropriately apply them. We anticipate that open availability of data will usher in an entire vibrant sector of technology providers that each add value to the metrics in novel ways. We are seeing very promising developments in this direction already.

Q. What’s next for PLOS ALM in terms of new features and developments?

A. Our current efforts are primarily focused on developing the ALM application to serve the needs not only of single publishers but of the entire research ecosystem. We are thrilled too that the community is increasingly participating in this vision, as the application grows into a bona fide open source community project with collaborators across the publishing sector, including CrossRef. On the home front, the application is essentially an open framework that can capture activity on scholarly objects beyond the article, and we’ll be exploring this further with research datasets. Furthermore, we will be overhauling the full display of ALM on the article page metrics tab with visualizations that tell the story of article activity across time and across ALM sources.  We will also release a set of enhancements to ALM Reports so that it better supports the wide breadth of reporting needs for researchers, funders, and institutions.

ScienceOpen Author Interview Series – Daniel Graziotin

Today’s interview comes from another recent ScienceOpen author, Daniel Graziotin. In his ScienceOpen article, “Green open access in computer science – an exploratory study on author-based self-archiving awareness, practice, and inhibitors,” he analyses the results of an exploratory questionnaire given to Computer Scientists. It addressed issues around various forms of academic publishing, self-archiving of research, and copyright. In the following interview with ScienceOpen, Graziotin gives a unique perspective, as a young scientist and as a software and web developer coming to scientific publishing from the world of open software development. His ideas are bound to be interesting to emerging scientists in particular, as he represents a new globally engaged generation of scientific researchers making full use of open knowledge to further innovation.

“All research should be OA”. We agree!

Today’s interview comes from Dr. Janis Vogt, a PhD in Biochemistry and a member of the Thomas J. Jentsch Research Group in the Department of Physiology and Pathology of Ion Transport at the Leibniz-Institut für Molekulare Pharmakologie in Berlin, Germany.

ScienceOpen Editorial Board Bernd Hartke, Christian-Albrechts-University, Kiel, Germany

Today’s interview comes from ScienceOpen Editorial Board member Professor Bernd Hartke, who gives us the benefit of his experience and his insights on Open Access publishing versus the traditional publication model. We are pleased to have scientists like Professor Hartke on the ScienceOpen Editorial Board, who are willing to …

In:  Announcements  

Review any of 1.3 million+ OA articles on the ScienceOpen platform – receive a DOI so your contribution can be found and cited

Image credit: International Health Academy

At ScienceOpen, the research + publishing network, we’re enjoying some of the upsides of being the new kid on the Open Access (OA) block. Innovation and building on the experiments of others is easier when there’s less to lose, but we are also the first to admit that life as a start-up is not for the faint-hearted!

In the years since user-generated comments and reviews were first introduced, those of us who strive to improve research communication have wrestled with questions such as: the potential for career damage; whether content should address peer or public audiences; whether comments should come from experts, everyone or a mix; and lower-than-anticipated participation.

We want to acknowledge the many organizations who have done a tremendous job at showing different paths forward in this challenging space. Now it’s our turn to try.

Since launch, ScienceOpen has assigned members different user privileges based on their previous publishing history, as verified by their ORCID ID. This seemed like a reasonable way to measure involvement in the field and to ensure the right level of publishing experience to understand the pitfalls of the process. This neat diagram encapsulates how it works.
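
Since the diagram itself isn’t reproduced here, a toy sketch of the same idea follows; the thresholds below are entirely our invention, not ScienceOpen’s actual rules:

    # Toy sketch of publication-history-based privileges; the thresholds
    # are invented for illustration and are not ScienceOpen's actual rules.
    def member_status(verified_publications: int) -> str:
        """Map an ORCID-verified publication count to a membership level."""
        if verified_publications >= 5:
            return "Scientific Member"   # e.g. may review and comment
        if verified_publications >= 1:
            return "Member"              # e.g. may comment
        return "Registered User"         # e.g. may read and follow the debate

    print(member_status(7))  # -> Scientific Member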

Scientific and Expert Members of ScienceOpen can review all the content on the site, which includes 1.3 million+ OA articles and a very small number of our own articles (did we mention, we’re new!).

All reviews require a four-point assessment (using five stars) of the level of importance, validity, completeness and comprehensibility, and there’s space to introduce and summarize the material. Inline annotation captures reviewer feedback during reading. Next up in the site release cycle: mechanisms to make it easy for authors to respond to inline observations.
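
Structurally, a review on the platform can be thought of as bundling those four ratings with a summary and inline notes. Here is a sketch under our own naming, not ScienceOpen’s actual data model or API:

    # Illustrative sketch of a review as described above; field names,
    # validation and the placeholder DOI are ours, not ScienceOpen's.
    CRITERIA = ("importance", "validity", "completeness", "comprehensibility")

    def make_review(article_doi, summary, stars, annotations=()):
        """Bundle the four five-star assessments with a summary and inline notes."""
        assert set(stars) == set(CRITERIA), "all four criteria must be rated"
        assert all(1 <= stars[c] <= 5 for c in CRITERIA), "ratings use 1-5 stars"
        return {
            "article_doi": article_doi,
            "summary": summary,                # introduces and summarizes the material
            "stars": dict(stars),              # one 1-5 rating per criterion
            "annotations": list(annotations),  # inline notes captured while reading
            "review_doi": None,                # assigned on publication so the review can be cited
        }

    review = make_review(
        "10.9999/example.article",             # placeholder DOI, not a real article
        "Clear methods; conclusions slightly overstated.",
        {"importance": 4, "validity": 3, "completeness": 4, "comprehensibility": 5},
        annotations=["Fig. 2: axis units missing"],
    )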

In a move sure to please busy researchers tired of participating without recognition, each review, including the subsequent dialogue, receives a Digital Object Identifier (DOI) so that others can find and cite the analysis and the contribution becomes a registered part of the scientific debate.

Welcome to our wonderful world of Reviewing! Please share your feedback here or @Science_Open.


In:  Other  

ScienceOpen Interview with David Black, Secretary General, International Council for Science.

David Black is Secretary General of the International Council for Science (ICSU) and Professor of Organic Chemistry at the University of New South Wales, Australia. An advocate of Open Access for scientific data in his role at ICSU, Professor Black is a proponent of the initiatives of ICSU and ICSU-affiliate groups, such as the Committee on Freedom and Responsibility in the Conduct of Science (CFRS), the ICSU-World Data System (ICSU-WDS), the International Council for Scientific and Technical Information (ICSTI), the ICSU’s Strategic Coordinating Committee on Information and Data (SCCID), …

ScienceOpen Editorial Board: Robson Santos, Federal University of Minas Gerais, Brazil

Today, we’re featuring a video clip by one of our newest Editorial Board members, Robson Santos:

 

Professor Santos is Full Professor at the Federal University of Minas Gerais, Brazil, at the Institute of Biological Sciences. He is secretary of the Inter-American Society of Hypertension and president of the Brazilian Society of Hypertension Scientific Committee. He has been coordinator of the Laboratory of Hypertension at the Federal University of Minas Gerais since 1985, has supervised more than 30 MSc and more than 25 doctoral students, and has published over 150 articles in international publications.