In:  Altmetrics  

The relationship between journal rejections and their impact factors

Frontiers recently published a fascinating article about the relationship between impact factors (IF) and rejection rates across a range of journals. It was a neat little study designed around the perception among many publishers that, in order to generate high citation counts for their journals, they must be highly selective and publish only the ‘highest quality’ work.

Apart from the time and money wasted in rejecting perfectly good research, this apparent relationship has important implications for researchers. They often submit first to higher-impact (and therefore apparently more selective) journals in the hope that this confers some sort of prestige on their work, rather than letting the research speak for itself. Given the relatively high likelihood of rejection, submissions then work their way down the ‘impact ladder’ until a more receptive venue is finally found.

The new data from Frontiers shows that this perception is most likely false. From a random sample of 570 journals (indexed in the 2014 Journal Citation Reports; Thomson Reuters, 2015), it seems that journal rejection rates are almost entirely independent of impact factors. Importantly, this implies that researchers can just as easily submit their work to less selective journals and still have the same impact factor assigned to it. This finding will remain important while the impact factor continues to dominate assessment criteria and how researchers evaluate each other (whether or not the IF is a good candidate for this is another debate).

I wanted to look into this a bit more to see how this pattern changes when we look at different partitions of the dataset. For example, one might think that the pattern is driven by a prevalence of low-impact and highly unselective journals. Also, in the figure reported by Frontiers, the y-axis (IF) is log-transformed for some reason – it’s not clear whether the underlying data were transformed too, but either way this distorts the reported correlation, visually or statistically, so I figured it would be good to look at the raw data again here.
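As an aside on why the transform matters: Pearson’s r measures linear association, so correlating rejection rate against log(IF) rather than raw IF will generally give a different coefficient. A minimal pure-Python sketch on synthetic data (the values below are invented for illustration and are not the Frontiers numbers):

```python
import math
import random

def pearson(xs, ys):
    """Pearson's product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(1)
# synthetic stand-in: 570 journals, rejection rates in [0, 1], and an
# impact factor with a heavy right tail (log-linear in the rate)
rates = [random.random() for _ in range(570)]
ifs = [math.exp(1.5 * r + random.gauss(0, 1)) for r in rates]

r_raw = pearson(rates, ifs)
r_log = pearson(rates, [math.log(v) for v in ifs])
print(f"raw IF: r = {r_raw:.2f}; log IF: r = {r_log:.2f}")
```

The two coefficients differ, which is why it matters whether a reported correlation was computed on raw or log-transformed impact factors.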

Thankfully, the dataset is published via Figshare and openly available to all for analysis.

Relationship between rejection rates and impact factors (see here for sources).

As you can see, based on the full dataset, whether or not the impact factor is log-transformed has little effect on the overall structure of the data. What is clearer, though, is that the journals with the highest impact factors tend to have the highest rejection rates, of around 0.9 (or 90%). As Frontiers report, the correlations are quite weak across a combination of parametric and non-parametric tests (Pearson’s, Spearman’s, and Kendall’s). However, the correlations here are an order of magnitude higher than those reported by Frontiers, most likely because I did not log-transform the raw data (the R script used to play with this data can be found here; change the file extension from .txt to .r).
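For reference, all three reported statistics can be computed without any special libraries. A rough sketch (this version of Spearman’s rho ignores tied ranks, and Kendall’s is the simple tau-a variant):

```python
import math
from itertools import combinations

def pearson(xs, ys):
    """Pearson's r: linear correlation of the raw values."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def ranks(vals):
    """1-based rank positions; ties are not averaged in this sketch."""
    order = sorted(range(len(vals)), key=lambda i: vals[i])
    r = [0] * len(vals)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    """Spearman's rho: Pearson's r applied to the ranks."""
    return pearson(ranks(xs), ranks(ys))

def kendall(xs, ys):
    """Kendall's tau-a: (concordant - discordant) / all pairs."""
    conc = disc = 0
    for (x1, y1), (x2, y2) in combinations(zip(xs, ys), 2):
        s = (x1 - x2) * (y1 - y2)
        if s > 0:
            conc += 1
        elif s < 0:
            disc += 1
    n = len(xs)
    return (conc - disc) / (n * (n - 1) / 2)

# toy example: a monotonic but non-linear relationship
x, y = [1, 2, 3, 4], [1, 4, 9, 16]
print(pearson(x, y), spearman(x, y), kendall(x, y))
```

On the monotonic toy example the rank-based statistics reach 1.0 while Pearson’s r falls just short of it, which is exactly why reporting both parametric and non-parametric coefficients is a sensible sanity check.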

Importantly for researchers, there appears to be a range of journals with impact factors between 5 and 10 (i.e., moderate) that have extremely low rejection rates. These are what we might refer to as ‘good’ journals, as your likelihood of being published with them will be higher. Of course, what this implies is that the impact factor is a very poor predictor of the perceived quality of work, based on the probability that it will be rejected or accepted by differently ranked journals. You might as well shoot for a journal that is 10 times more likely to accept your work than a highly selective one with an equal impact factor.

If we look at a partitioned dataset, excluding all journals with a rejection rate lower than 0.6, a slightly different structure emerges. The correlation strengths increase, and begin to show that among journals with higher rejection rates, those rates correspond weakly to journal impact factors. This correlation is undoubtedly skewed by the few highlighted (and, unfortunately, un-named) journals, which have anomalously high impact factors for their rejection rates, well above the usual trend. Importantly, though, it suggests there might be a different pattern between ‘mid-tier’ journals and those with lower rejection rates.

As above, but discarding all data with a rejection rate below 0.6.

However, if we look at just the most selective journals (i.e., above a 0.9 rejection rate), we see that there is still no correlation between rejection rate and impact factor, even for this anomalous subset of the data. This is because there are still plenty of journals with very low impact factors but very high rejection rates. Comparatively, these would make a poor choice of journal if impact factor played into your selection criteria for submission.

As above again, but discarding all journals with a rejection rate below 0.9.
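The partitioning used in these two cuts is just a threshold filter followed by recomputing the correlation. A minimal sketch, with invented (rejection_rate, impact_factor) pairs standing in for the real Figshare data:

```python
import math

def pearson(xs, ys):
    """Pearson's product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical (rejection_rate, impact_factor) pairs -- invented for
# illustration; the real 570-journal dataset is on Figshare
journals = [
    (0.95, 30.0), (0.93, 3.2), (0.91, 4.5), (0.90, 12.0),
    (0.75, 8.0), (0.62, 6.5), (0.55, 6.0), (0.30, 1.2),
]

def partition(data, threshold):
    """Keep only journals with a rejection rate at or above threshold."""
    return [(r, f) for r, f in data if r >= threshold]

for cut in (0.0, 0.6, 0.9):
    sub = partition(journals, cut)
    rates, ifs = zip(*sub)
    print(cut, len(sub), round(pearson(rates, ifs), 2))
```

Each pass prints the threshold, how many journals survive the cut, and the correlation within that subset, mirroring the full / ≥0.6 / ≥0.9 analyses above.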

As the Frontiers article points out, these data are good evidence against the notion that to obtain a high IF, a journal must be highly selective and reject a lot of research. This is actually really important for both publishers and researchers, as it tells us that the time and money wasted chasing higher IFs only serve to increase rejection rates, not the impact factors of journals. Furthermore, it shows that if we assume IFs measure some aspect of journal or article quality, as many do, then this has very little to do with the selectivity of journals based on a priori assessment.

The IF originates from the subscription-based era of publishing and was originally intended to help librarians to select journals worth purchasing. It neither reflects the actual number of citations for a given article nor its scientific quality. At ScienceOpen, we believe that alternative metrics that measure “impact”, “relevance” and “quality” at the article level and by various other means will replace the IF sooner or later. This is why ScienceOpen supports the San Francisco Declaration on Research Assessment (DORA), and why we report altmetric scores for every article within our archive.

What would be really cool in the future is to expand upon this dataset. Adding journal names would be an obvious benefit for researchers, so they could see which journals might be better candidates for submission. Another dimension could be to include aggregated journal altmetric scores, which would allow us to explore whether highly selective journals get the most social attention for the research they publish. Another aspect to investigate might simply be the number of articles published against the rejection rate. Either way, it’s a really nice dataset to explore some of the more detailed aspects of publishing, and we thank the Frontiers team for publishing it.

6 thoughts on “The relationship between journal rejections and their impact factors”

  1. It would also be interesting to know how the rejection rate after being sent for peer review varies between journals of different IF. Because referees are likely to be similar in all cases, I suspect the actual rejection rates post-review are similar and high rejection rates are primarily because of immediate rejection by editors because they feel papers are out of the journal scope and/or will not be highly cited.

    So is one message from this that we may as well submit to high IF journals because they are just as likely to be accepted as low IF journals? With the added benefit that if accepted in a high IF journal then our peers, employers and funding agencies think it is somehow ‘better’ research.

    In the long term, one’s individual citation rate is a better ‘indicator’ of how widely read one’s papers are and the size of that research field. So it is perhaps best to publish in respectable, reasonable-cost open-access journals, especially if the top journals immediately rejected your paper.

  2. Interesting work and analysis: see also the preliminary study undertaken by Bjork and Catani “Peer review in megajournals compared with traditional scholarly journals: Does it make a difference?” which hints at the same findings (they found little difference in citation between articles from similarly-ranked journals that operated either high-quality or lightweight peer review).

  3. There are many factors known to play a greater role in influencing a journal’s IF than rejection rate. Scientific discipline has already been mentioned. Prestige/reputation is an obvious one: Science, Nature, and Cell have high impact factors because their reputation is such that scientists submit their best work to these journals. Journals that publish a high percentage of review articles get higher IFs because these articles get cited more frequently (restrictions on the number of references allowed per article reinforce this trend; thankfully these seem to be on their way out). All of this seems somewhat beside the point. Are impact factors a measure of scientific quality? No – they are an average of citation frequencies across an entire journal. In my mind, the best way to address issues of quality is through strengthening the peer review process (incidentally, many open access journals have made their peer review processes less rigorous or even eliminated them altogether). Given that IFs do not measure quality, should they be a factor in hiring, promotion, or tenure? Probably not – but to the extent that they impact the ability of PIs to secure funding for their research, they will be. Universities are in it for the money, just like everyone else. I don’t think we should be looking to replace one flawed metric with another, but should rather encourage funding agencies and university committees to take the time necessary to actually evaluate a scientist’s work instead of using a mathematical composite score as a convenient yet deeply flawed surrogate.

Comments are closed.