
Article vs Journal Impact – Perspective from PLOS ONE Editorial Director Damian Pattinson

The Hellas Impact Basin on Mars (edited topographical map), which may be the largest crater in the solar system. Credit: Stuart Rankin, Flickr, CC-BY-NC

Earlier this summer, I spoke over Skype with Damian Pattinson, the Editorial Director of PLOS ONE, about the Impact Factor, its widespread misuse, and how, thankfully, altmetrics now offer a better way forward.

Q. The PLOS ONE Impact Factor has decreased for a few years in a row. Is this to be expected, given its standing as the world’s largest journal and its remit to publish all good science regardless of impact?

A. I don’t think the Impact Factor is a very good measure of anything, but clearly it is particularly meaningless for a journal that deliberately eschews evaluation of impact in its publication decisions. Our founding principle was that impact should be evaluated post-publication. In terms of the average number of citations per article, my sense is that this is changing due to the expanding breadth of fields covered by PLOS ONE, not to mention its sheer size (we recently published our 100,000th article). When you grow as quickly as we have, your annual average citation rate will always be suppressed by the fact that you are publishing far more papers at the end of the year than at the beginning.

Q. Articles at PLOS ONE undoubtedly vary in the number of citations they accrue. Some accrue more, some fewer. Is there an observable pattern to this trend overall that is not reflected by a simple read of the Impact Factor?

A. Differences in the average number of citations are, to a large extent, subject specific and therefore a reflection of the size of a particular research community. Smaller fields simply produce fewer scientific papers, so statistically it is less likely that even a highly cited paper will have as many citations as one published in a larger research field. Such a subject-specific examination may also reveal different patterns if one looks at metrics besides citation. That is something we are very interested in exploring with Article-Level Metrics (ALM).

Q. Has the reduction of PLOS ONE’s Impact Factor influenced its submission volume or is that holding up relatively well?

A. Actually, submission volume is still increasing, even though the rate of growth has slowed. Year-on-year doubling in perpetuity is not realistic in any arena. We have seen a drop in the number of publications, however, due to a number of factors. Most notably, we have seen an increase in the rejection rate as we continue to ensure that the research published in PLOS ONE is of the highest standard. We put all our papers through rigorous checks at submission, covering ethical oversight, data availability, and adherence to reporting guidelines, so more papers are rejected before being sent for review. We have also found an increase in submissions better suited to other dissemination channels, and have worked with authors to pursue them. But to your point, I do not think that last year’s change in IF directly affected PLOS ONE submission volume.

Q. Stepping back for a moment, it really is extraordinary that this arguably flawed mathematical equation, first proposed by Dr Eugene Garfield in 1955, is still so influential. Garfield said, “The impact factor is a very useful tool for evaluation of journals, but it must be used discreetly.”

It seems that the use of the IF is far from discreet, since it is a prime marketing tool for many organizations, although not at PLOS, which doesn’t list the IF on any of its websites (kudos). But seriously, do you agree with Garfield that the IF has merit in journal evaluation, or indeed that evaluating journals at all has merit in the digital age?

A. Any journal-level metric is going to be problematic as “journals” continue to evolve in a digital environment. But the IF is particularly questionable as a tool to measure the “average” citation rate of a journal, because the distribution is hardly ever normal: in most journals, a few highly cited papers contribute most of the IF, while a great number of papers are hardly cited at all. The San Francisco Declaration on Research Assessment (DORA) is a great first step in moving away from using journal metrics to measure things they were never intended to measure, and I recommend that everyone sign it.
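The skew Pattinson describes is easy to see with a toy calculation. The sketch below uses entirely hypothetical citation counts; the two-year Impact Factor is, roughly, citations received in one year to a journal’s papers from the previous two years, divided by the number of citable items from those years, i.e. a mean:

```python
# Hypothetical citation counts for 100 papers in one journal's two-year window:
# three heavily cited outliers, twenty modestly cited papers, and 77 uncited ones.
citations = [250, 120, 40] + [3] * 20 + [0] * 77

impact_factor = sum(citations) / len(citations)   # the IF-style mean
median = sorted(citations)[len(citations) // 2]   # the typical paper

print(f"mean (IF-style): {impact_factor:.2f}")    # 4.70, driven by the outliers
print(f"median citations: {median}")              # 0
```

With these made-up numbers the journal’s mean is pulled up to 4.70 by three outliers while the median paper is never cited at all, which is exactly why a few highly cited papers can carry the IF for everyone else.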

Q. What are the main ways that the IF is misused, in your opinion?

A. The level to which the IF has become entrenched in the scientific community is amazing. Grants, tenure, and hiring at nearly every level depend on the IF of the journals in which a researcher publishes his or her results. Nearly everyone realizes that it is not a good way to measure quality or productivity, but uses it anyway. Actually, it’s more complicated than that: everyone uses it because they think that everyone else cares about it! So academics believe that their institutions use it to decide tenure, even when the institutions have committed not to; institutions think that the funders care about it, despite commitments to the contrary. In some way the community itself needs to reflect on this and make some changes.

The IF creates perverse incentives for the entire research community, including publishers. Of course journals try to improve their score, often in ways that are damaging to the research community. Because of how the IF is calculated, it makes sense to publish high-impact papers in January so that they collect citations for the full twelve months. Some journals hold back their best papers for months to increase the IF, which is bad for the researchers as well as for science as a whole. Journals also choose to publish papers that may be less useful to researchers simply because they are more highly cited. So they will publish (often unnecessary) review articles, while refusing to publish negative results or case reports, which are cited less often despite offering more useful information.
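The January effect follows directly from the IF’s fixed citation window. As a hedged sketch (the function name and the census year are illustrative, not an official formula): a paper counted in, say, a 2014 IF is published at some point in 2012 or 2013 but its citations are counted during 2014, so the earlier in its year it appears, the longer it has to accumulate them:

```python
def months_of_exposure(pub_year, pub_month, census_year=2014):
    """Months between publication and the end of the census year,
    during which the citations counted by the IF can accumulate."""
    return (census_year + 1 - pub_year) * 12 - (pub_month - 1)

# Two papers in the same IF cohort get very different exposure:
print(months_of_exposure(2013, 1))   # January 2013 paper: 24 months
print(months_of_exposure(2013, 12))  # December 2013 paper: 13 months
```

Both papers count identically in the denominator, but the January paper has nearly twice as long to be read and cited before the count closes, which is the incentive Pattinson describes.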

Q. Could you imagine another metric which would better measure the output of journals like PLOS ONE?

A. Of course you are right: for journals that cover a broad range of disciplines, or for interdisciplinary journals, the Impact Factor is even less useful because of the subject-specific statistics we spoke of earlier. There have been a number of newcomers such as ScienceOpen, PeerJ and F1000Research with a very broad scope; as these and other new platforms come into the publishing mainstream, we may find new metrics to distinguish strengths and weaknesses. Certainly the Impact Factor is not the best mechanism for measuring journal quality and, even less so, researcher quality.

Q. How do you feel about ScienceOpen Advisory Board Member Peter Suber’s statement, in a recent ScienceOpen interview, that the IF is “an impact metric used as a quality metric, but it doesn’t measure impact well and doesn’t measure quality at all”?

A. How often a paper is cited in the scholarly literature is an important metric. But citations are a blunt tool at best for measuring research quality. We do not know anything about the reason a paper was cited: it could in fact be to refute a point, or as an example of incorrect methodology. If we only focus on citations, we are missing a more interesting and powerful story. With ALMs that also measure downloads, social media usage, recommendations, and more, we find huge variations in usage. In areas beyond basic research, such as clinical medicine or applied technology, which have implications for the broader population, a paper may have a big political impact even though it is not highly cited. ALMs are really starting to show us the different ways different articles are received. At the moment we do not have a good measure of quality, but we believe reproducibility of robust results is key.

At PLOS we have been at the forefront of this issue for many years, and are continuing to innovate to find better ways of measuring and improving reproducibility of the literature. With our current focus on “impact” we are disproportionately rewarding the “biggest story” which may have an inverse relationship to reproducibility and quality.

Q. PLOS has a leadership role within the Altmetrics community. To again quote ScienceOpen Advisory Board Member Peter Suber on the current state of play: “Smart people are competing to give us the most nuanced or sensitive picture of research impact they can. We all benefit from that.”

Did PLOS predict the level to which the field has taken off and the amount of competition within it or is the organization pleasantly surprised?

A. The need was clearly there and only increasing over time. When we began our Article-Level Metrics (ALM) work in 2009, we envisioned a better system for all scholars. This is certainly not something specific to Open Access.

Since then, the particular details of how we might better serve science continue to evolve, especially now that the entire community has begun to engage actively with these issues together. It’s great that there is increasing awareness that the expanding suite of article activity metrics cannot fully come of age until the underlying data are made freely available for all scholarly literature and the metrics are widely adopted. Only then can we better understand what the numbers truly mean, in order to apply them appropriately. We anticipate that open availability of data will usher in a vibrant sector of technology providers that each add value to the metrics in novel ways. We are seeing very promising developments in this direction already.

Q. What’s next for PLOS ALM in terms of new features and developments?

A. Our current efforts are primarily focused on developing the ALM application to serve the needs not only of single publishers but of the entire research ecosystem. We are also thrilled that the community is increasingly participating in this vision, as the application grows into a bona fide open-source community project with collaborators across the publishing sector, including CrossRef. On the home front, the application is essentially an open framework that can capture activity on scholarly objects beyond the article, and we’ll be exploring this further with research datasets. Furthermore, we will be overhauling the full display of ALM on the article page metrics tab with visualizations that tell the story of article activity across time and across ALM sources. We will also release a set of enhancements to ALM Reports so that it better supports the wide breadth of reporting needs of researchers, funders, and institutions.