Open Science Stars: An interview with Daniel Shanahan

Last week, we kicked off a series of interviews with some of the top ‘open scientists’ by speaking with Dr. Joanne Kamens of Addgene, and had a look at some of the great work she’d been doing in promoting a culture of data sharing and equal opportunity for researchers. Today, we’ve got something completely different, with Daniel Shanahan of BioMed Central, who recently published a really cool PeerJ paper on auto-correlation and the impact factor.

Hi Daniel! To start things off, can you tell us a bit about your background?

I completed a Master’s degree in Experimental and Theoretical Physics at the University of Cambridge, but must admit I did my Master’s more to have an extra year to play rugby for the university than out of any love of micro-colloidal particles and electron lasers. I have always loved science, though, and found my way into STM publishing, albeit by a slightly less than traditional route.

What’s it like to work at ‘the original’ open access publisher?

I love it! I am a huge supporter of open science and transparency, and BioMed Central has been a real game changer in so many ways, obviously with open access and open peer review, but also with many other issues around transparency, including publication of null and inconclusive results, pre-registration, publication of protocols and so on. The company is still determined to make science better and everyone who works here is incredibly passionate, so it is sometimes easy to forget that many people – possibly even the majority – still view transparency and open science with suspicion.

Gulliver Turtle! (Source)

The impact factor and the fact that it is so ingrained in our collective consciousness is a sign that we care less about the science and more about the headline – citations don’t measure quality, they measure activity, which are not the same thing.

You recently published a paper with PeerJ about the Journal Impact Factor. What was your motivation behind this study?

I have been arguing for years that people selectively cite articles in high impact factor journals, not because the article is objectively better, or even more relevant to what they are doing, but simply because it is from a high impact factor journal. From my experience, researchers seem willing to take as given that articles in high impact factor journals are ‘good’, while other journals have to prove themselves. This is especially frustrating when, in some cases, the lower impact factor journals may in fact be objectively ‘better’ in some ways than the higher impact factor journals. This also goes a long way to explaining the impact factor inflation that’s been seen, particularly in biomedicine. I simply wanted to demonstrate that this is what is happening – there have been a number of studies already linking citations to authors, publishing license and so on, but I wanted to show that, all else being equal, the article in the higher impact factor journal would be more cited, highlighting the absurdity of using it as a quality measure.

The idea for the study actually came as one of those ‘I wish I’d said that’ moments, while I was waiting for my plane home and completing a Delphi study for an EQUATOR reporting guideline. I had just been presenting at a conference, and had got into a discussion (read ‘argument’) about this exact point, and realised that, as the guidelines were often co-published across multiple journals, this would provide me with a cohort to look into it and see if I was right.

From my experience, researchers seem willing to take as given that articles in high impact factor journals are ‘good’, while other journals have to prove themselves.

The key finding of your study seemed to be that citations and impact factors are strongly correlated. What does this association imply?

Essentially it shows that researchers are unable to disentangle the article itself from the journal it’s published in, and still believe that the journal impact factor is the final word regarding the quality of an article. If you want your article to be highly cited, regardless of whether it is any good or not, you are better off publishing in a high impact factor journal (although I doubt this comes as news to anyone). It also means that the impact factors of high IF journals are going to keep increasing, not because of anything they do or publish, but simply because of what they are.

Linear regression fits for log-transformed journal impact factor and citations for nine co-published consensus reporting statements. (Source)
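
To make the figure a little more concrete, here is a minimal sketch, not the paper’s actual analysis code, of the kind of log-log linear regression described in the caption. The impact factors and citation counts below are invented purely for illustration.

```python
# Minimal sketch of the fit shown in the figure: ordinary least squares on
# log-transformed journal impact factors vs. log-transformed citation counts
# for co-published versions of a single reporting statement.
# NOTE: these numbers are invented for illustration; the paper analysed
# citation data for nine co-published consensus reporting statements.
import numpy as np
from scipy import stats

impact_factors = np.array([2.1, 3.5, 5.0, 7.2, 13.9, 17.4, 30.4, 47.8])
citations = np.array([12, 25, 40, 55, 140, 190, 410, 820])

fit = stats.linregress(np.log10(impact_factors), np.log10(citations))
print(f"slope = {fit.slope:.2f}, R^2 = {fit.rvalue ** 2:.2f}")

# A positive slope with a high R^2 is what you would expect if, for the same
# article, the copy in the higher impact factor journal simply picks up more
# citations.
```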

If I were to be somewhat more controversial, I would suggest it is also indicative of the fallacy of ‘quality’ concerning a study’s outcome. You could have the perfect study – flawless methods, conducted per protocol, well reported with the data available – and the likelihood is it would show nothing interesting. To my mind, this is good science. What it is not, though, is interesting science (at least not necessarily). The impact factor and the fact that it is so ingrained in our collective consciousness is a sign that we care less about the science and more about the headline – citations don’t measure quality, they measure activity, which are not the same thing.

What impact do you think this might have on the use of impact factors in the future as an assessment tool?

Remarkably little, unfortunately. With this study, I have been able to demonstrate something many people have suspected for years. The thing is, even though we now know this to be the case, no one seems willing to change. Even those who are outspoken critics of the impact factor still play the game. I am in the fortunate position that my career is in no way dependent on my publications, so it is easy for me to point out the absurdity of it. For those in academia, it is a riskier prospect. In order for anything to change, those at the top – both journals and academics – need to take the first step. But the game is working out well for them, so there’s no incentive.

Why do you think research, as a culture, loves the impact factor, in spite of such widespread criticism?

It’s easy. There’s a longstanding joke that a researcher’s primary aim is to read as little as possible, and impact factors allow them to do just that. They can treat articles in high impact factor journals as high quality and ignore everything in lower impact factor journals unless it forces its way to their attention. To be honest, this makes sense in a perverted sort of way: millions of articles are published every year, so you can’t read them all and need some way to filter them. Until someone comes up with a better way, the impact factor will remain. What needs to change, though, is its use in assessment.

Image credit: Jorge Cham

There’s a longstanding joke that a researcher’s primary aim is to read as little as possible, and impact factors allow them to do just that.

Do you have any advice for junior researchers concerned about impact factors?

More sympathy than anything. There are two sides to these results, and the one I’d really rather people didn’t focus on is that the assumption that publishing in a high impact factor journal will get your article cited more does seem to be entirely true. The advice I would give is simply to find the best journal for your article – many other things are correlated with citations, including the field of research and the publication model (open access articles do significantly better than pay-walled ones, for example). Work out what ‘success’ would look like for you – do you want citations? Eyes on the page? Potential for collaboration? – and then identify the right journal for your work. Don’t be afraid of going for a lower impact factor journal, as it might be better in the long run.

Any thoughts on altmetrics as an alternative to the impact factor? Do you think these are good indicators of ‘impact’?

I think it really depends on what you consider the impact factor to be trying to do. It is a very good measure of the mean number of citations over a one-year period to articles in a journal published in the previous two years. What is not clear, however, is why that matters, or even what it means. Impact and quality are not the same thing; they’re not even related – just read the Daily Mail if you want proof of the latter [edit for non-Brits: The Daily Mail is a garbage tabloid published in the UK]. The only alternative I am aware of is altmetrics, which provide a different view of ‘impact’, although these are just as clearly self-correlated. The main advantage of altmetrics is that they are at least at the article level, rather than the journal level, so whatever they represent, you know it’s true for that article.
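
For readers unfamiliar with how the metric is calculated, the description above corresponds to the standard two-year journal impact factor, which can be written as:

$$\text{JIF}_{Y} = \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}$$

So, to take a purely illustrative example, a journal that published 200 citable items across 2014 and 2015, and whose items received 600 citations in 2016, would have a 2016 impact factor of 600 / 200 = 3.0.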

What can those in a position of leadership do to draw researchers away from misusing impact factors?

They really need to lead. Senior scientists need to ignore impact factors when choosing where to submit, and when recruiting/promoting people. The big journals need to sign onto DORA (the San Francisco Declaration on Research Assessment) and stop using their impact factor for marketing. Funders need to stop considering the impact factor when allocating funds and measuring output. In order for this to change, it needs to be top down, yet for some reason most of the discussion I see seems to be bottom up.

Senior scientists need to ignore impact factors when choosing where to submit, and when recruiting/promoting people.

How do you see the future of assessment in scholarly publishing? What are the steps we need to take to get there?

I think an awful lot needs to change. I’ve actually written and spoken on this rather a lot, so am obviously very biased, but I think assessment needs to move away from the journal level and possibly even the article level. Science needs to be assessed at the study level – was it well designed and conducted? Can the study be repeated? Can the results be built on? Can they be replicated? A single study can lead to multiple articles, across multiple journals, which can make finding all the information like looking for a needle in a haystack. This is a symptom of measuring activity not quality.

Publishing and assessment need to change. Personally, I would like to see it move towards living documents, where all the information is recorded in real time – the protocol uploaded, the statistical analysis, full data and tools. This means that anyone looking to find out information about your study, or to use or build on it, has access to that information. It would also preclude some of the issues we currently face, like selective reporting, HARKing (hypothesising after the results are known), p-hacking, etc. If we moved toward this, the concept of a journal would change, moving from somewhere that publishes articles to somewhere that collates and curates research, sifting through the thousands of studies out there and building a collection, complete with readable research synopses. Then assessment would be based on the quality of the science, and publishing would allow people to identify and read what is relevant to them. Needless to say, we have a long way to go first though!

Thanks Daniel, it’s been great to get your insight and hear about your new paper!

Image credit: Daniel Shanahan

Daniel Shanahan joined BioMed Central in 2013 as Associate Publisher, where he oversees a portfolio of broad-scope biomedical journals and the ISRCTN trial registry. In his role he drives a number of open science and research transparency strategies and initiatives, including chairing the Threaded Publications (Linked Clinical Trials) working group with Crossref. He sits on the advisory boards for OpenTrials and the Methods in Research on Research (MIROR) project, is a member of the CONSORT Group and has helped develop a number of research reporting statements, and is participating in strategic efforts to encourage the wider adoption of reporting guidelines and to improve policies to combat publication bias and selective reporting, among others.