
Category: Impact Factor

Diverse Approaches to Peer Review

Portrait of Albert Einstein in a museum. Source: pixabay.com

Peer Review Week, Sep 10-15, 2018

Peer Review Week is a global event celebrating the role of peer review in maintaining scientific quality. This year marks the event’s fourth anniversary of bringing together researchers, institutions, and organizations committed to the message that good peer review is crucial to scholarly communication. This year’s Peer Review Week, on the theme of diversity, aims:

  • To emphasize the central role peer review plays in scholarly communication
  • To showcase the work of editors and reviewers
  • To share research and advance best practices
  • To highlight the latest innovations and applications.
    (Source: https://peerreviewweek.wordpress.com/)

Although peer review itself is not as young as the week-long event organized in its celebration, it is still a relatively recent invention. Albert Einstein published his original papers in non-peer-reviewed German journals through 1933, most famously in the Annalen der Physik. Max Planck, one of the journal’s editors at the time, described his editorial philosophy as follows:

To shun much more the reproach of having suppressed strange opinions than that of having been too gentle in evaluating them.

After moving to the US, Einstein was so shocked that a paper he submitted to the Physical Review in 1936 was met with negative criticism that he decided never to publish with the journal again. Ironically, the paper in question hypothesized that gravitational waves do not exist. In retrospect, peer review spared Einstein the controversy and embarrassment that would have ensued had he published his original article.

Einstein’s anecdotal experience with peer-reviewed and non-peer-reviewed journals points both to the necessity of peer review for quality scholarly publishing and to the danger of excluding scientific arguments from the academic narrative. ScienceOpen bridges the gap between these two approaches by making both preprints and peer-reviewed scholarly articles accessible through its discovery environment, with a unified review framework for researchers to evaluate results.

The “preprint” enables researchers to openly share their results with peers at an early stage and still publish the peer-reviewed final version of their findings in a journal of their choice. To help researchers find preprints, or to concentrate on peer-reviewed literature only, searches on ScienceOpen can be filtered to show only preprints or to exclude them. We are currently tracking preprints from arXiv, bioRxiv, PeerJ, Preprints.org, ChemRxiv, and the Open Science Framework repositories.

Once a preprint has been published, ScienceOpen offers a full set of tools to peer-review and curate the content. Users can organize and manage the review entirely on their own. Found an interesting preprint, but want an expert opinion before using it in your research? Invite a reviewer! Researchers can either review an article themselves or invite an expert colleague to do so with one click of a button on every article page. Reviewers currently need a minimum of 5 records attached to their ORCID. ScienceOpen encourages everyone to openly participate in this process, thereby contributing to the diversification of expert opinions on a specific topic.

The fact that a paper has been published, and therefore peer-reviewed, does not mean that evaluation of the research should stop. ScienceOpen enables post-publication peer review across its 45 million article records, in the form of reviews of the final published version. Article reviews, modeled after book reviews, are published under the reviewer’s name and should provide orientation and an evaluation of the research for readers. Peer review as an open dialogue between experts actively contextualizes the research within ongoing scientific debates and helps researchers gain deeper insight into a specific topic.

In order to fully recognize the contribution of reviewers and ensure maximal discoverability for authors, ScienceOpen integrates seamlessly with Crossref and ORCID. ScienceOpen has linked users with ORCID from the beginning, and has recently been actively participating in Crossref’s development of peer review content registration. In its press release ‘Crossref facilitates the use of essential peer review information for scholarly communications’, Crossref emphasized the importance of persistent records for peer review and commended ScienceOpen on successfully implementing metadata that enriches “scholarly discussion, reviewer accountability, transparency, and peer review analysis”. Stephanie Dawson, CEO of ScienceOpen, added that rich metadata is key to discoverability – for research articles, preprints, books, conference proceedings, and now for peer review reports. Crossref is making these reviews easier to identify and find, which translates into “more impact for researchers and publishers”. Anyone can retrieve the data necessary for their own integration and analysis. As the Crossref press release concludes, rich metadata helps institutions and researchers build a better picture of the role of peer review in scholarly communications as a whole, not only in terms of identifying and assessing their own contributions.
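Because peer review reports registered this way carry standard Crossref metadata, anyone can query them programmatically. The snippet below is a minimal sketch, not anything specific to ScienceOpen’s records: it uses Crossref’s public REST API with its ‘peer-review’ work type filter, assumes the Python requests library, and handles fields as they are commonly returned.

```python
# Minimal sketch: list a few registered peer review records from Crossref's public REST API.
# Assumes the third-party "requests" package; field names reflect typical Crossref responses.
import requests

API_URL = "https://api.crossref.org/works"

def fetch_peer_reviews(rows=5):
    """Fetch a small sample of works registered with the 'peer-review' type."""
    params = {"filter": "type:peer-review", "rows": rows}
    response = requests.get(API_URL, params=params, timeout=30)
    response.raise_for_status()
    return response.json()["message"]["items"]

if __name__ == "__main__":
    for item in fetch_peer_reviews():
        titles = item.get("title") or ["(untitled review)"]
        print(item.get("DOI", "n/a"), "-", titles[0])
```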

Peer review is necessary to ensure quality scientific publishing, but it still needs to be honed to the greater benefit of researchers, the scientific community, and ultimately society as a whole. ScienceOpen contributes to this goal by integrating rich metadata, featuring preprints, and enabling post-publication peer review. We look forward to hearing more potential solutions for diversifying the peer review process for greater impact during #PeerReviewWeek18!

Why ScienceOpen Research doesn’t have an impact factor

ScienceOpen is more than just a publisher – we’re an open science platform!

We publish from across the whole spectrum of research: Science, Technology, Engineering, Humanities, Mathematics, Social Sciences. Every piece of research deserves an equal chance to be published, irrespective of its field.

We also don’t discriminate based on the type of research: original research, small-scale studies, opinion pieces, “negative” or null findings, review articles, data and software articles, case reports, and replication studies. We publish it all.

At ScienceOpen, we believe that the Journal Impact Factor (JIF) is a particularly poor way of measuring the impact of scholarly publishing. Furthermore, we think it is a highly misleading metric for research assessment, despite its widespread [mis]use for this purpose, and we strongly encourage researchers to adhere to the principles of DORA and the Leiden Manifesto.
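For context, the JIF is nothing more than a journal-level average over a two-year window. The standard calculation for a journal in year Y is:

```latex
% Standard two-year Journal Impact Factor for a journal in year Y
\mathrm{JIF}_Y =
  \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}
       {\text{number of citable items published in years } Y-1 \text{ and } Y-2}
```

An average of this kind says nothing about how citations are distributed across individual articles, which is one reason it is so misleading when applied to research assessment.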


This is why, for our primary publication ScienceOpen Research, we do not obtain or report the JIF. Instead, we provide article-level metrics and a range of other indicators that enrich the context of each article, and we extend this to all 25 million research articles on our platform.

Further reading

A simple proposal for the publication of journal citation distributions (link)

How can academia kick its addiction to the impact factor? (link)


How can academia kick its addiction to the impact factor?

The impact factor is academia’s worst nightmare. So much has been written about its flaws, both in calculation and application, that there is little point in reiterating the same tired points here (see this piece by Stephen Curry for a good starting point).

Recently, I was engaged in a conversation on Twitter (story of my life…) with the nice folks over at the Scholarly Kitchen and a few researchers. There was a lot of finger pointing, with the blame for impact factor abuse being aimed at researchers, publishers, funders, Thomson Reuters, and basically every player in the scholarly communication environment.

As with most Twitter conversations, very little was achieved in the moderately heated back and forth. What became clear, though, or at least clearer, is that despite everything that has been written about its detrimental effects in academia, the impact factor is still widely used: by publishers for advertising, by funders for assessment, by researchers for choosing where to submit their work. The list is endless. As such, there are no innocents in the impact factor game: all are culpable, and all need to take responsibility for its frustrating immortality.

The problem is cyclical if you think about it: publishers use the impact factor to appeal to researchers, researchers use the impact factor to justify their publishing decisions, and funders sit at the top of the triangle facilitating the whole thing. One ‘chef’ of the Kitchen piped in to say that publishers recognise the problems but still have to use the metric because it’s what researchers want. This sort of passive facilitation of a broken system helps no one; it acknowledges the problem while dodging any responsibility for the fundamental misuse of a problematic metric. The same goes for academics.

Oh, I didn’t realise it was that simple. Problem solved.

Eventually, we agreed on the point that finding a universal solution to impact factor mis-use is difficult. If it were so easy, there’d be start-ups stepping in to capitalise on it!

(Note: these are just smaller snippets from a larger conversation)

What some of us did seem to agree on in the end, or at least a point that remains important, is that everyone in the scholarly communication ecosystem needs to take responsibility for, and action against, misuse of the impact factor. Pointing fingers and dealing out blame solves nothing; it simply deflects accountability without changing anything and, worse, facilitates what is known to be a broken system.

So here are eight ways to kick that nasty habit! The impact factor is often referred to as an addiction for researchers, or a drug, so let’s play with that metaphor.

Continue reading “How can academia kick its addiction to the impact factor?”  


Article vs Journal Impact – Perspective from PLOS ONE Editorial Director Damian Pattinson

The Hellas Impact Basin on Mars (edited topographical map), which may be the largest crater in the solar system. Credit: Stuart Rankin, Flickr, CC-BY-NC

Earlier this summer, I skyped with Damian Pattinson, the Editorial Director of PLOS ONE, about the Impact Factor, its widespread misuse, and how, thankfully, altmetrics now offer a better way forward.

Q. The PLOS ONE Impact Factor has decreased for a few years in a row. Is this to be expected given its ranking as the world’s largest journal and remit to publish all good science regardless of impact?

A. I don’t think the Impact Factor is a very good measure of anything, but clearly it is particularly meaningless for a journal that deliberately eschews evaluation of impact in its publication decisions. Our founding principle was that impact should be evaluated post-publication. In terms of the average number of citations per article, my sense is that this is changing due to the expanding breadth of fields covered by PLOS ONE, not to mention its sheer size (we recently published our 100,000th article). When you grow as quickly as we have, your annual average citation rate will always be suppressed by the fact that you are publishing far more papers at the end of the year than at the beginning.
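A toy calculation with made-up numbers (not PLOS data) illustrates the effect: if citations accrue roughly in proportion to a paper’s age, a journal whose output is concentrated late in the year will show a lower average at the end of a fixed citation window than one publishing the same total at a steady monthly rate.

```python
# Toy illustration (hypothetical numbers, not PLOS data): papers published late in the
# year have had less time to be cited, so rapid within-year growth drags down the
# journal's average citations per paper even if each paper is cited at the same rate.
def average_citations(monthly_papers, citations_per_month=0.5, window_months=24):
    """Average citations per paper at the end of a fixed window, assuming
    citations accrue linearly with the number of months since publication."""
    total_papers = sum(monthly_papers)
    total_citations = sum(
        count * citations_per_month * (window_months - month)
        for month, count in enumerate(monthly_papers)
    )
    return total_citations / total_papers

steady_output  = [100] * 12                        # same output every month
growing_output = [40 + 20 * m for m in range(12)]  # output heavily skewed to year-end

print(round(average_citations(steady_output), 2))   # ~9.25
print(round(average_citations(growing_output), 2))  # ~8.46, despite identical per-paper citation rate
```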

Q. Articles at PLOS ONE undoubtedly vary in terms of the number of citations they accrue. Some are higher, some lower. Is there an observable pattern to this trend overall that is not reflected by a simple read of the Impact Factor?

A. Differences in the average number of citations are, to a large extent, subject-specific and therefore a reflection of the size of a particular research community. Smaller fields simply produce fewer scientific papers, so statistically it is less likely that even a highly cited paper will accrue as many citations as one published in a larger research field. Such a subject-specific examination may also reveal different patterns if one looks at metrics besides citation. That is something we are very interested in exploring with Article-Level Metrics (ALM).

Q. Has the reduction of PLOS ONE’s Impact Factor influenced its submission volume or is that holding up relatively well?

A. Actually, the effective submission volume is still increasing, even though the rate of growth has slowed. Year-on-year doubling in perpetuity is not realistic in any arena. We have seen a drop in the number of publications, however, due to a number of factors. Most notably, we have seen an increase in the rejection rate as we continue to ensure that the research published in PLOS ONE is of the highest standard. We put all our papers through rigorous checks at submission, including ethical oversight, data availability, and adherence to reporting guidelines, so more papers are rejected before being sent for review. We have also found an increase in submissions better suited to other dissemination channels, and have worked with authors to pursue them. But to your point, I do not think that last year’s change in IF directly affected PLOS ONE’s submission volume.

Q. Stepping back for a moment, it really is extraordinary that this arguably flawed mathematical equation, first mentioned by Dr Eugene Garfield in 1955, is still so influential. Garfield said “The impact factor is a very useful tool for evaluation of journals, but it must be used discreetly”.

It seems that the use of the IF is far from discreet, since it is a prime marketing tool for many organizations, although not at PLOS, which doesn’t list the IF on any of its websites (kudos). But seriously, do you agree with Garfield’s statement that the IF has any merit in journal evaluation, or that evaluating journals at all in the digital age has any merit?

A. Any journal-level metric is going to be problematic as “journals” continue to evolve in a digital environment. But the IF is particularly questionable as a tool to measure the “average” citation rate of a journal because the distribution is hardly ever normal – in most journals a few highly cited papers contribute most of the IF while a great number of papers are hardly cited at all. The San Francisco Declaration on Research Assessment (DORA) is a great first step in moving away from using journal metrics to measure things they were never intended to measure, and I recommend that everyone sign it.
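As a simple illustration, the hypothetical citation counts below (made up, not data from any real journal) show how a couple of highly cited papers can dominate the mean, and hence the IF, while the median paper receives only a handful of citations.

```python
# Hypothetical citation counts for 14 papers (made up for illustration): a skewed
# distribution where two outliers dominate the mean while most papers are barely cited.
from statistics import mean, median

citations = [0, 0, 0, 1, 1, 1, 2, 2, 3, 4, 5, 8, 45, 120]

print(f"mean   = {mean(citations):.1f}")    # ~13.7, pulled up by the two outliers
print(f"median = {median(citations):.1f}")  # 2.0, what a typical paper actually receives

top_two_share = sum(sorted(citations)[-2:]) / sum(citations)
print(f"top 2 papers account for {top_two_share:.0%} of all citations")  # ~86%
```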

Q. What are the main ways that the IF is misused, in your opinion?

A. The level to which the IF has become entrenched in the scientific community is amazing. Grants, tenure, and hiring at nearly every level depend on the IF of the journals in which a researcher publishes his or her results. Nearly everyone realizes that it is not a good way to measure quality or productivity, but uses it anyway. Actually it’s more complicated than that – everyone uses it because they think that everyone else cares about it! So academics believe that their institutions use it to decide tenure, even when the institutions have committed not to; institutions think that the funders care about it despite commitments to the contrary. In some way the community itself needs to reflect on this and make some changes.

The IF also creates perverse incentives for the entire research community, including publishers. Of course journals try to improve their score, often in ways that are damaging to the research community. Because of how the IF is calculated, it makes sense to publish high-impact papers in January so that they collect citations for the full 12 months. Some journals hold back their best papers for months to increase the IF – which is bad for the researchers as well as for the whole of science. Journals also choose to publish papers that may be less useful to researchers simply because they are more highly cited. So they will choose to publish (often unnecessary) review articles, while refusing to publish negative results or case reports, which will be cited less often (despite offering more useful information).

Q. Could you imagine another metric which would better measure the output of journals like PLOS ONE?

A. Of course you are right: for journals that cover a broad range of disciplines or for interdisciplinary journals, the Impact Factor is even less useful because of the subject-specific statistics we spoke of earlier. There have been a number of newcomers such as ScienceOpen, PeerJ, and F1000Research with a very broad scope – as these and other new platforms come into the publishing mainstream, we may find new metrics to distinguish strengths and weaknesses. Certainly the Impact Factor is not the best measure of journal quality and, even less so, of researcher quality.

Q. How do you feel about ScienceOpen Advisory Board Member Peter Suber’s statement in a recent ScienceOpen interview that the IF is “an impact metric used as a quality metric, but it doesn’t measure impact well and doesn’t measure quality at all”?

A. How often a paper is cited in the scholarly literature is an important metric, but citations are at best a blunt tool for measuring research quality. We do not know anything about the reason a paper was cited – it could in fact be to refute a point or as an example of incorrect methodology. If we focus only on citations, we are missing a more interesting and powerful story. With ALMs that also measure downloads, social media usage, recommendations, and more, we find huge variations in usage. In fields beyond basic research, such as clinical medicine or applied technology, which have implications for the broader population, a paper may have a big political impact even though it is not highly cited. ALMs are really starting to show us the different ways different articles are received. At the moment we do not have a good measure of quality, but we believe reproducibility of robust results is key.

At PLOS we have been at the forefront of this issue for many years, and we are continuing to innovate to find better ways of measuring and improving the reproducibility of the literature. With our current focus on “impact”, we are disproportionately rewarding the “biggest story”, which may have an inverse relationship to reproducibility and quality.

Q. PLOS has a leadership role within the Altmetrics community. To again quote ScienceOpen Advisory Board Member Peter Suber on the current state of play: “Smart people are competing to give us the most nuanced or sensitive picture of research impact they can. We all benefit from that.”

Did PLOS predict the level to which the field has taken off and the amount of competition within it or is the organization pleasantly surprised?

A. The need was clearly there and only increasing over time. When we began our Article-Level Metrics (ALM) work in 2009, we envisioned a better system for all scholars. This is certainly not something specific to Open Access.

Since then, the particular details of how we might better serve science have continued to evolve, especially now that the entire community has begun to actively engage with these issues together. It’s great that there is increasing awareness that the expanding suite of article activity metrics cannot fully come of age until the data are made freely available for all scholarly literature and the metrics are widely adopted. Only then can we better understand what the numbers truly mean in order to apply them appropriately. We anticipate that the open availability of data will usher in an entire vibrant sector of technology providers that each add value to the metrics in novel ways. We are already seeing very promising developments in this direction.

Q. What’s next for PLOS ALM in terms of new features and developments?

A. Our current efforts are primarily focused on developing the ALM application to serve the needs not only of single publishers but of the entire research ecosystem. We are thrilled too that the community is increasingly participating in this vision, as the application grows into a bona fide open source community project with collaborators across the publishing sector, including CrossRef. On the home front, the application is essentially an open framework that can capture activity on scholarly objects beyond the article, and we’ll be exploring this further with research datasets. Furthermore, we will be overhauling the full display of ALM on the article page metrics tab with visualizations that tell the story of article activity across time and across ALM sources.  We will also release a set of enhancements to ALM Reports so that it better supports the wide breadth of reporting needs for researchers, funders, and institutions.


Journal Impact Factors – Time to say goodbye?

Along with over 10,000 others, I signed the San Francisco Declaration on Research Assessment (DORA, http://www.ascb.org/dora). Why? I believe that the impact factor was a useful tool for the paper age, but that we now have the capability to develop much more powerful tools to evaluate research. For hundreds of years scientific discourse took place on paper – letters written and sent with the post, research cited in one’s own articles printed and distributed by publishers. In many cases, citation was the most direct way to respond to the research of another scientist. In the 1970s as… Continue reading “Journal Impact Factors – Time to say goodbye?”