
How can academia kick its addiction to the impact factor?

The impact factor is academia’s worst nightmare. So much has been written about its flaws, both in calculation and application, that there is little point in reiterating the same tired points here (see this piece by Stephen Curry for a good starting point).

Recently, I was engaged in a conversation on Twitter (story of my life…) with the nice folks over at the Scholarly Kitchen and a few researchers. There was a lot of finger pointing, with the blame for impact factor abuse being aimed at researchers, publishers, funders, Thomson Reuters, and basically every other player in the scholarly communication environment.

As with most Twitter conversations, very little was achieved in the moderately heated back and forth about all this. What became clear though, or at least clearer, is that despite everything that has been written about the detrimental effects of the impact factor in academia, it is still widely used: by publishers for advertising, by funders for assessment, by researchers for choosing where to submit their work. The list is endless. As such, there are no innocents in the impact factor game: all are culpable, and all need to take responsibility for its frustrating immortality.

The problem is cyclical if you think about it: publishers use the impact factor to appeal to researchers, researchers use the impact factor to justify their publishing decisions, and funders sit at the top of the triangle facilitating the whole thing. One ‘chef’ of the Kitchen chipped in by saying that publishers recognise the problems, but still have to use the impact factor because it’s what researchers want. This sort of passive facilitation of a broken system helps no one; it is a way of acknowledging that a metric is deeply problematic while declining to take any responsibility for its misuse. The same goes for academics.

Oh, I didn’t realise it was that simple. Problem solved.

Eventually, we agreed on the point that finding a universal solution to impact factor mis-use is difficult. If it were so easy, there’d be start-ups stepping in to capitalise on it!

(Note: these are just smaller snippets from a larger conversation)

What some of us did seem to agree on in the end, or at least a point that remains important, is that everyone in the scholarly communication ecosystem needs to take responsibility for, and action against, misuse of the impact factor. Pointing fingers and dealing out blame solves nothing: it merely deflects accountability without changing anything and, worse, facilitates what is known to be a broken system.

So here are eight ways to kick that nasty habit! The impact factor is often referred to as an addiction for researchers, or a drug, so let’s play with that metaphor.

  1. Detox on the Leiden Manifesto

The Leiden Manifesto provides a great set of principles for more rigorous research evaluation. If these best-practice principles could be converted into high level policy for institutes and funders, with a major push for their implementation coming from the research community, we could see a real and great change in the assessment ecosystem. With this, we will see a concomitant change in how research develops and interacts with society. Evaluation criteria must be based on high quality and objective quantitative and qualitative data, and the Leiden Manifesto lays out how to do this.

  2. Take a DORA nicotine patch

The San Francisco Declaration on Research Assessment (DORA) was started in 2012 by a group of editors and publishers of scholarly journals in order to tackle malpractice in research evaluation. It recognised the inadequacy of the impact factor as a measure of scientific quality, and provided a series of recommendations for improving research evaluation. These include: eliminating the use of journal-based metrics in funding, appointment, and promotion considerations; assessing research on its own merits; and exploring new indicators of significance.

To date, 7985 individuals and 589 organisations have signed DORA. Those figures are lower than the number of researchers boycotting Elsevier and the number of global open access policies, respectively, so there is still much scope for communicating and implementing these recommendations.

  3. Attend ‘objective evaluation’ clinics and bathe in a sea of metrics

In 2015, a report called ‘The Metric Tide’ was published following an Independent Review of the Role of Metrics in Research Assessment and Management. The review was set up in April 2014 to investigate the current and potential future roles that quantitative indicators can play in the assessment and management of research.

The review found that peer review and qualitative indicators should form the basis for evaluating research outputs and individuals, with careful use of metrics as a supplement. This will help to capture more diverse aspects of research, and limit concerns arising from the gaming and misuse of metrics such as the impact factor. It also advocated the responsible use of metrics based on the dimensions of transparency, diversity, robustness, reflexivity, and humility – five traits, none of which the impact factor possesses.


  4. Vape your way to a deeper understanding of impact factors

To summarise well-documented limitations of the impact factor (from DORA):

  1. Citation distributions within journals are highly skewed (see the toy sketch after this list);
  2. The properties of the Journal Impact Factor are field-specific: it is a composite of multiple, highly diverse article types, including primary research papers and reviews;
  3. Journal Impact Factors can be manipulated (or “gamed”) by editorial policy;
  4. Data used to calculate the Journal Impact Factors are neither transparent nor openly available to the public.
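
To make the first of those points concrete, here is a minimal sketch in Python using entirely made-up citation counts (an assumption for illustration only, not real journal data). It shows how a couple of heavily cited papers can drag a JIF-style mean far above what a typical article in the same journal actually receives.

```python
# Toy illustration, not real data: hypothetical citation counts for the
# articles a journal published over the previous two years. The JIF is,
# roughly, the mean of such a distribution: citations received this year
# to items from the previous two years, divided by the number of citable
# items from those two years.
import statistics

citations = [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 5, 6, 7, 9, 12, 45, 180]

jif_like_mean = sum(citations) / len(citations)  # what a JIF-style average reports
typical = statistics.median(citations)           # what a typical article receives

print(f"JIF-style mean:   {jif_like_mean:.1f}")  # ~14.4, inflated by two outliers
print(f"Median citations: {typical}")            # 3.0
```

Most articles in this toy journal sit at or below three citations, yet the journal-level average suggests something very different, which is exactly why a journal’s number tells you little about any individual paper published in it.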

Recent research has also shown that impact factors are strongly auto-correlated, becoming a sort of self-fulfilling prophecy. It is deeply ironic that researchers, supposedly the torch-bearers of reason, evidence, and objectivity, persistently commit to using a metric that has been so consistently shown to be unreasonable, secretive, and statistically weak. To learn more, see Google.

Understanding thine enemy is the first step to being able to defeat them.

  5. Chew on the gummy content of the paper

Knowledge is nicotine for researchers. There will never be a metric that assesses the quality of a paper better than actually reading it. ‘Read the damn paper’ has even become a bit of a rallying cry for the anti-impact factor community, which makes perfect sense. However, there are often situations in which huge swathes of papers and other research outputs have to be assessed, and so short-hand alternatives to reading papers, such as the impact factor or the journal title, are used as proxies for quality. This becomes a problem when quality and prestige diverge, as is very common, because the proxies are then no longer reflective of the traits being assessed.

Solutions exist, such as employing more people in assessment exercises, or asking for submission of only a few key research outputs, which make it feasible to actually digest the content of each article or output and to make better-informed assessments of research. It is unfair and inappropriate that researchers, funders, and other bodies are put in positions where they cannot commit to these approaches and are forced to use poor shortcuts instead. However, when time and volume are not an issue, there is simply no excuse for evaluating work based on poor proxies.

Credit: Hilda Bastian
  6. Just quit. Go cold turkey.

As someone who used to smoke, I finally quit by going cold turkey. Partially because I could no longer afford to keep up the habit as a student, but that’s beside the point. The point is to make a personal commitment to yourself that you will no longer succumb to the lures of the impact factor. Reward yourself with cupcakes and brownies. You owe it to yourself to be objective, to be critical, and to be evidence-informed about your research, and this includes how you evaluate your colleagues’ work too. Commitments like this can be contagious, and it always helps to have the support of your colleagues and research partners. Create a poster saying “This is an impact factor-free work environment” and stick it somewhere everyone can see!

Something like this, but with more or less raptors depending on preference.
  7. Don’t hang around other impact factor junkies

The first rule of impact factors is we don’t talk about impact factors (the irony of this post is fully appreciated). This is ‘how to kick an addiction 101’. When you quit smoking/drugs/coffee, the last thing you want is to be hanging around others who keep doing it. It’s bad for your health, and just drags you right back down the path of temptation. If someone insists on using the impact factor around you, explain to them everything in this post. Or just leave. They’re simply adopting bad practices, and you don’t want to, or have to, be part of that. If it’s your superior, a long, frank discussion about the numerous problems with, and alternatives to, the impact factor is well worth your time. Scientists are well known for being completely reasonable and open to these sorts of discussions, so no problems there.

  8. Take a methadone hit of sweet sweet altmetrics

While this has sort of been covered by the first three points, altmetrics, or alternative metrics, are a great way of assessing how your research has been disseminated through social channels. As such, they are a sort of pathway or guide to ‘societal impact’, and provide a nice complement to citation counts, which are often used as a proxy for ‘academic impact’. Importantly, they operate at the article level, so they do not suffer the enormous shortcomings of journal-level metrics such as the impact factor, and they offer a much more accurate insight into how research is being re-used.
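
To make the ‘article-level’ point a little more concrete, here is a hedged sketch that looks up attention data for a single output. It assumes the freely accessible Altmetric details endpoint (api.altmetric.com/v1/doi/&lt;doi&gt;) and reads the response fields defensively, since the exact field names are assumptions on my part rather than anything specified in this post; the DOI is just a placeholder to swap for your own.

```python
# A rough sketch: fetch article-level attention data for one DOI.
# Assumes the public Altmetric details endpoint is available and returns a
# flat JSON object; fields are read with .get() so the script degrades
# gracefully if the response shape differs.
import json
import urllib.request

doi = "10.1371/journal.pone.0127502"  # placeholder DOI; substitute your own
url = f"https://api.altmetric.com/v1/doi/{doi}"

try:
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
except Exception as exc:  # network errors, unknown DOIs (HTTP 404), etc.
    print(f"Could not retrieve altmetrics for {doi}: {exc}")
else:
    print("Title:          ", data.get("title"))
    print("Attention score:", data.get("score"))
    print("Tweeters:       ", data.get("cited_by_tweeters_count", 0))
    print("News outlets:   ", data.get("cited_by_msm_count", 0))
```

Because these numbers attach to the article itself rather than to the venue, two papers in the same journal can, and usually do, show very different attention profiles.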

What other solutions can we implement to help eliminate the impact factor, and to make academic assessment and publishing a fairer, more transparent, and evidence-informed process? The Metric Tide report, DORA, and the Leiden Manifesto are all great steps towards this goal, but the question still remains of how we embed their recommendations and principles in academic culture.

We should be very aware that there is absolutely nothing to lose from employing these recommendations and partial solutions. What we can gain, though, is an enriched and informed process of evaluation, one which is fair and benefits everyone. That’s important.

41 thoughts on “How can academia kick its addiction to the impact factor?”

  1. I don’t mean to be petty, and I agree the JIF needs to go, but this whole list is very much useless. It does not address at all the underlying structural problems that have propped up the JIF. We have “absolutely nothing to lose” by following these recommendations? That’s very naive. As a researcher, you’ll be out of a job/funding if you follow these instructions. As a publisher, you’ll go bankrupt. Like any impossibly impractical “plan”, its success depends critically on massive, simultaneous action by all participants, which is not going to happen no matter how wonderful the end result would be. Look at our society and problems like racism, sexism, ableism, etc. Your stance on the solution to the JIF is like arguing that racism is solely due to individual character flaws while ignoring structural racism.

    1. Actually the entire post was about addressing structural change through broad-scale initiatives combined with individual action. See points 1-3 for examples of system-wide solutions that can be implemented.

    2. I was reviewing a paper on this very theme yesterday. I think Jon captured some of the key arguments very well. DORA and Leiden both point to key issues and propose sound solutions. I rarely see researchers describe the quality of their work by pointing to the IF of the journal in which they publish. Sure, there’s a subjective clustering of title quality, but this varies by geography (I’ve worked in academe in a few countries and can attest to this). But the JIF is used by publishers to sustain subscription rates!

  2. Kimmy is quite right, I would say. I started reading your post and then skimmed the broad topics of your numbered list. None of them address the key – structural – issue: even if everyone knew (and most people know a lot) everything that’s bad about the IF and nobody ever would use it for their own evaluations, everyone would still think that everyone else is using the IF. Even if every researcher would stop using the IF and roll their eyes whenever someone brought it up, there are still many countries where administrators use it without even being accessible to anybody who could tell them how moronic and counter-productive this is.

    The issue, in most places, is not only that someone explicitly asks for the IF of the journals they published in, but that even in places where this doesn’t happen, even in places (like the UK) where it is explicitly excluded (i.e., REF), people still think they will be evaluated by it, no matter how many people know that using IF is wrong, no matter how often you have to give it to them in writing that this kind of evaluation will not happen.

    Evaluation by IF happens explicitly in many places, implicitly (i.e., by journal name) in even more places. Therefore, everyone just assumes that they will be evaluated by IF, no matter how accurate this assumption is. For this assumption, the places where IF is actually used don’t even need to be particularly numerous. They just need to be talked about every once in a while. A paper in CNS will never hurt non-IF-based evaluations, but *not* having one could hurt in IF-based situations. So, in a hyper-competitive environment, you err on the side of caution: in all places where IF is irrelevant, CNS won’t hurt you, in all other places, it’ll help – a lot. In today’s cut-throat competitive, neoliberal environment, only reckless fools or overconfident jackasses discount journal rank.

    Kimmy is also right that such massive collective action problems are virtually impossible to solve. That’s the main reason I argue for the abolishment of journals. To do that, we just need to convince relatively few infrastructure experts in central places that their money is better spent on actual, value-adding services than wasted on shareholder value and paywalls. The collective action problem shrinks from convincing a few million to convincing a few hundred.

    In the end, what we need to archive our work and discuss it is an infrastructure. Inasmuch as the infrastructure involves communication, you could compare journals vs. modern IT services to the Pony Express and Skype: both the Pony Express and Skype get messages from one person to another, only Skype does it much better, faster, and with additional features. There is a reason your institution is paying for Skype, but not for the Pony Express, or even the telegraph or the phone. For that same reason, our institutions should not pay for the equivalent of the Pony Express: journals. Instead, they should pay for the Skype equivalent: a modern digital infrastructure that provides the same services as journals, only faster, better, and with more functionality.

    With the equivalent of Skype being expensive, at least equally cumbersome and time-consuming plus potentially risky for your career, while the safe bet was to use the Pony Express, what do you think the chances are that some people will use Skype just because someone on the internet tells them it’s the ethically right thing to do? What do you think the chances are that not just some, but a majority of people would switch, eventually bringing down the massively profitable, massively lobbying Pony Express?

    Kimmy is dead on: asking people to do these 8 things is about as useful as asking people to pretty please:

    1. wash every single item of trash in their house
    2. separate each item into its components
    3. collect all the items in separate bins
    4. drive the bins across town to the recycling station
    5. wait in line in front of each container
    6. dump your recyclables
    7. pay a fee
    8. participate in a lottery where you might lose your job

    All the while they have an enticing trash container right in front of their house with none of the steps above and no risk of losing the job. What do you think most people will do?

    1. I’m actually a little surprised by this response. The first three points of the post are all about addressing structural reform by providing recommendations/principles upon which to initiate cultural change. While I agree that the problem is structural, I don’t think these potential solutions should be so easily dismissed. Rather, I think they should be drilled in further – how else do you get positive change? I would also say that they are much more easily reached at the present compared to discarding the entire journal system.

      Also, your analogy doesn’t really work, seeing as these are mostly things that require little to no physical action, besides knowledge generation (i.e., the day job for researchers). They can also be enacted completely independently and individually. This post wasn’t meant to show 8 things you have to do, but rather 8 potential solutions that will help combat a broken system. While there might be consequences, this is why addressing the issues with a top down-bottom up combo, as mentioned, is important. What do we want? Structural reform! When do we want it? When the appropriate policies are in place.

      1. Don’t get me wrong, none of your points are false. If we followed them, we’d get rid of JR in no time. However, I’d love to live on a planet where most people were likely to follow such ethical rules just like that. Where can I find it?

        A crucial problem I see with your 8 points is that none of them will do it alone, so effectively all of them have to be followed. Put differently: every single person in academia must follow at least some of the prescriptions for any change to happen. As long as a few do #1, a few do #4, and again some others find #7 attractive, nothing is going to happen.
        The pipe-dream that it might actually be feasible to get everyone to behave against their own self-interest (or even just to engage with something they have no interest in!) for a common good is one of the main reasons that the only things we have been able to accomplish in 21 years are to get more people to talk about change (i.e., subsidize the airlines) – and to liberate, on average, 1% of the literature per year.

        Why do you think people use Skype/Hangout without 8 rules telling them to not use the phone?
        Why do you think people dug up the ground all over the world in the late 80s and early 90s to pull copper and later fiber wires from university to university without eight rules asking faculty to please log in to the mainframe to use email to correspond with the 9 other people on the planet who had an address? A lot of change happens without asking your 70-year-old thesis advisor to pretty please change the way they have been publishing for 40 years. 🙂

    2. Excellent post Bjorn! The problem is just as much about the *perception* of how research assessment is done, as you say. We need to move to a much broader means of assessing research and researchers instead of relying almost entirely on *where* they publish. Misuse of Impact Factors is not the cause of the problem; it’s an effect of it.

      1. Couldn’t agree more, but combating misuse of the IF is still a way to start addressing it, especially as there is very little, again besides the first three points in this post, that looks into addressing the system of academic assessment.

        1. While I would have subscribed to this notion not too long ago, the more I think about it now, the less convinced I become.
          What if combating JIF only leads people to embrace other, equally flawed metrics?
          What if combating JIF only leads people to make up their own journal preferences that are essentially impossible to combat?
          What if combating JIF only leads to more publicity for the opponents who just reply “it may be flawed, but it’s the best we have”?
          Of course, nobody can be sure about that and, after all, JIF is an easy target – almost too easy.

          1. Re. point 1: This is why recommendations like those in the Metric Tide report etc. are important.
            Re. point 2: Isn’t that what people do anyway essentially? (based around the concept of ‘prestige’?)
            Re. point 3: This is why we need real, structural, systemic, political, social, and cultural solutions.

            This is just one small piece of a very complex puzzle.

          2. Good point. I discussed that some time ago with Dr Ramaekers in Belgium: metrics are like having grades at school. Everybody knows it is counterproductive, does not give a fair assessment of the work, and provides an incentive for competition instead of collaboration. But everybody is still using it.
            The solution: 1. Stop using it yourself (do not use metrics at all). 2. Tell the world. 3. Find alternative schools for your kid (PhD student).

  3. This is a very interesting post, thanks for the contribution on the subject, and all the recommendations. I would like to highlight some points that require wider discussion:

    * IF is a measure of how much attention you get for your publication, but not of the actual opinions or measurable quality of the research. If you look at altmetrics you also get a measure of attention (how many tweets, Facebook pages, etc.) with some weighting that should reflect the importance of the source, but it still does not get at the bit of quality assessment that is needed. You could fabricate a pseudo-science paper in an obscure journal or on a self-publication platform with a controversial title or message and get tweeted and liked by thousands of (place-any-topic-here)-deniers.

    * IF and altmetrics are unfair to non-English languages. The dominance of English in the international academic sector means that research written in other languages will seldom get high ranks in any metric, because it is disseminated to a smaller and more fragmented audience. However, we do need research output in our native languages, we need to convince local leaders and politicians to implement policies based on research, we need them to understand risks and opportunities, and to discuss strategies based on scientific knowledge.

    * Attention is also very skewed, and this is true at so many geographical, political, and social levels… for example, research on skin cancer will draw media, public, and researcher attention alike, the discovery of a new dinosaur will draw a lot of kids’ attention, and so on, but some very basic and local research remains ignored by a large part of the public. I think this will never change, and that is not bad at all; some scientists might be interested in working in their local environment, delivering data or recommendations for dealing with local problems, documenting local outbreaks of diseases, developing methods or best practices for improving crop yield, etc. But these researchers also need mechanisms for getting acknowledged for their contributions within their field of influence.

    * I think we do an everyday assessment of publication quality (point 5 in the list above). We, as researchers, do read papers every day, and we have opinions about them; we recommend them to colleagues or ask students to work on them. The problem is that these opinions are not shared systematically with a wider audience. Post-publication peer-review platforms can do a great job, but they also depend on how much attention you get from peers, and I think the number of users is still very low. What we need is a set of mechanisms for assessing quality in simple steps, that is, in everyday actions: think of buttons like “This article was useful for me!” next to the article download options, or short surveys that allow one to assess four or five key measures of quality (scope, methods, conclusions, etc.), or a method that can evaluate comments like “this article has a very local scope” or “methods are good, results are interesting but the discussion was biased”. This could be implemented in academic social networks like academia.edu or ResearchGate, and could be endorsed by publishers themselves (putting a “comment” or “your opinion” button next to the article).

  4. I think it is important not to conflate JIF and journal prestige. For example, some people may want to publish in Nature because it has an IF of ##?, but many others probably want to publish in it because it is Nature. Once one gets into the large mass of journals that have neither much broad reputation nor a very high IF, then, perversely, researchers may well use the IF more at the moment to discriminate between them and decide where to send a manuscript. Which is both sloppy and probably has little impact on the ‘quality’ of their own CV.
    But my point is there has always been a certain prestige hierarchy between journals, and JIF has merely formalised this in a numeric sense (we can argue about how well it does this and about its structural flaws, but that doesn’t invalidate the underlying point). Merely removing JIF would not address this. I can think of the discipline-specific journals in my field (and I’m sure everyone can) which researchers are pleased to get papers published in, not necessarily (just) for the JIF, but for the general prestige of having a paper in a journal recognised by one’s peers as being of generally good quality.
    Therefore if JIF was removed overnight it may see some change in publishing/submitting practice, but people in my field would still want to publish in Geology, Geophysical Research Letters, Water Resources Research etc, because they are seen as quality journals independent of whatever number they have for their JIF.
    So what I’m saying is that removal of JIF would get rid of a lot of nonsense and metrics, but it would be highly unlikely to change the fact that some journals have a better reputation than others and many people will want to publish their papers in journals with a good reputation. Therefore publishing practice may not see a dramatic shift in behaviour in the absence of JIF.

    1. Hey Simon,

      Thanks for your comment. Yeah, I totally agree that JIF is just one part of what we view as journal prestige, or the brands associated with a journal. The thing is that it’s difficult to quantify prestige, and therefore its precise correlation with the JIF is probably unknown, although there most likely is a relationship between the two. However, even eliminating the JIF so that people aim for what they perceive as higher quality journals in a more objective way could be considered progress. It might help people to think more deeply about why they are submitting to certain venues, due to factors like acceptance-to-publication time, editorial quality, speed of peer review, etc., which would be great. Of course, none of this can happen if people are still defaulting to using the JIF to decide where to publish.

      1. Actually, see here for such correlations:

        Gordon, M. D. (1982). Citation ranking versus subjective evaluation in the determination of journal hierarchies in the social sciences. J. Am. Soc. Inf. Sci. 33, 55–57. doi: 10.1002/asi.4630330109

        Saha, S., Saint, S., and Christakis, D. A. (2003). Impact factor: a valid measure of journal quality? JMLA 91, 42–46

        Yue, W., Wilson, C. S., and Boller, F. (2007). Peer assessment of journal quality in clinical neurology. JMLA 95, 70–76

        Bottom line: IF aligns very well with subjective journal ranking.

  5. I am amazed at how easily you all accept that it is a kind of addiction. Shouldn’t we be setting up a kind of IF Anonymous so that all these people could go for discreet and effective treatment?

  6. Hi Jon,

    I am more and more convinced that this talk against impact factors could in fact be dangerous, in a similar way that requesting open access has backfired and led to the gold OA model, which has been a terrible development because it has pacified the community’s momentum without addressing the core problem in academia. This core problem is simply that universities waste billions on subscriptions (or APCs) instead of developing and offering for free a modern scholarly communication infrastructure that will include —among other essential services now partially provided by startups that eventually get absorbed by big publishers— a sophisticated and efficient reward system.

    Publishers are a thing of the past. The sooner we realise that and start organising our communities and associations around public infrastructures the better. One big problem is that publishers throw every now and then some crumbs to academics, politicians, associations, trying to keep everyone happy and calm. But hopefully, infrastructure people will eventually wake up and Universities will start offering, apart from an email account, free services for co-authoring, reference managers and other discovery tools, data-mining algorithms, software code, data visualisation tools, cloud archive for code, data and manuscripts, peer review, usage statistics, etc, etc. Until all this infrastructure is provided for free by institutions to their faculty we will still need publishers, impact factors, and promising startups by former academics who decided to abandon research to save the world. I think that asking researchers, especially young ones, to stop talking about impact factors (as suggested in most of your points) and concentrate on how to find the money for APCs is asking them to put their careers at risk and is utterly unethical.

    1. Hey Pandelis,

      Thanks for this thoughtful comment. I think I agree with you that the current APC-driven model of OA is quite wasteful due to the overall lack of development of a competitive market. I wouldn’t say it’s a waste, but it’s not exactly being done efficiently, and that’s because economic sustainability (i.e. preserving publishers) will always be a key driving force behind these policies. However, I do agree that a better scholarly communication system needs to be established than what we have at present. I have yet to see a roadmap for establishing that, though.

      So I’m not asking people to compromise their careers. I’ve said in previous posts that the responsibility to change must come from the top down, but that doesn’t mean we can’t all help to influence the change. I don’t think that asking your department/institute to sign DORA is a risky thing for your career, as I’ve done it and seem to be doing ok! There are a whole range of actions one can take, and these are just some potential solutions. I also didn’t mention anything about asking for money for APCs.

      1. Hi Jon,
        of course it doesn’t do any harm to support such initiatives. I just doubt it does any good. I direct you to a recent exchange of comments I had with David Crotty from the Scholarly Kitchen: https://scholarlykitchen.sspnet.org/2016/03/23/ask-the-chefs-what-is-the-biggest-misconception-people-have-about-scholarly-publishing/

        There, he correctly notes that whatever alternatives to the IF we come up with, publishers will simply adapt. We ask for OA, publishers are there; we ask for altmetrics, publishers provide; we ask for open peer review, no problem. We call some of them “bad”, and “good” publishers immediately appear to fill the gap and get their market share. It’s just useless. You can’t compete with that. And I seriously doubt who really wants to compete and bring change and who is just trying to invent ways to shift authors’ behaviour in order to get a piece of the pie.

        I can only repeat that the solution is to forget about publishers altogether and offer the necessary infrastructure for research preparation, discovery, analysis, writing, publication, outreach, assessment and archiving for free as standard services offered by Universities and research institutions.

        1. Hey Pandelis,

          Again, I think that as a solution what you outline is fine, and is very much in line with what Bjoern and others have mentioned. The question is how do you get there? We’re talking about massive systemic, political, and cultural changes here that require massive-scale engagement between a range of stakeholders. If you think about this as a process rather than just a solution, then you can start to outline the steps to get there. If you want the solution you describe, killing the impact factor (and perhaps the journal) is a step towards getting there. I also echo Stephanie’s comments below, and don’t think your comments about not needing education or not needing to take responsibility do much to help the current situation, when what I think academia gravely needs is a lesson in accountability!

    2. What kind of researchers are we “educating” if we tell young, curious, aspiring scientists that they have to care about the impact factor of where they publish their research results? Guido Guidotti, Harvard biochemistry professor told me of a post-doc applying to join his lab. When he asked what project the candidate was interested in, the young man said “I don’t care, as long as I get a paper in Nature or Cell in the first year.” Are those the kinds of researchers who are going to genuinely move science forward? I have spoken to so many young researchers who have told me that they are completely disillusioned by science because they thought it was about finding solutions to problems and are taught along the way not to share so that they can be the first to publish and maximize their brownie points with a high impact factor journal. We are losing many bright scholars and promoting those who are best at playing the reputation game. But there are also young researchers out there fighting for a more open and rigorous evaluation system and still making a career in science. Maybe their path is harder, but I don’t think that it is “unethical” to point to them as role models.

      1. Hi Stephanie,
        there is absolutely no need to educate us! I didn’t receive any education to use my institutional email account. It was just provided for free to me. I use Overleaf to co-author my papers, but if my university offered me a free alternative I would immediately switch to it. I publish my papers in high IF journals, but if my institutional repository/platform offered an alternative reward system recognised by evaluating committees I would immediately forget about journals and impact factors. We don’t need education, we need free, public, federated alternatives. We are not responsible for the system, and I insist that it is not appropriate to ask researchers to save the world by switching from “bad” publishers to “good” publishers. To us they are all the same: commercial enterprises that take advantage of a gap in services that were meant to be offered for free by research institutes. There is more than enough public money for that. That’s why I say that I don’t want my University to sign DORA; I just want them to stop paying subscriptions and APCs and to offer me the tools I need to do my research (and receive proper recognition for it). In the meantime, I will continue to prefer a Nature publication to any open access journal made by —former— researchers for researchers! And I seriously question the motivation of anyone who is trying to blame me for this choice 🙂

  7. I am a physicist working in general relativity and gravitation. I think that the freely accessible ADS database by NASA provides the community with very valuable tools. It offers a rather complete set of several different numerical indexes (not only the h-index) which capture different aspects of a researcher’s output. In particular, attention is also paid to the problem of self-citations by means of the tori and riq indexes, too often tacitly neglected in these kinds of discussions. I invite people from other disciplines to have a look; maybe that approach could be exported to other fields as well. Comments are, of course, welcome.

    1. Hi Lorenzo,

      Thanks for your comment. I agree that looking at a diversity of metrics is much more beneficial, as well as understanding where the data come from. I find it very odd that a non-reproducible, secretive, and nonsensical metric such as the JIF is still so widely used within academia.

  8. Thx for a very interesting article. The solution I missed was getting together with all the folks in your field to decide on journal ranking; it works very well at university level. And how about the yearly ranking by survey of profs in Germany?

  9. “We should not make so much fuss about the Journal Impact Factor. If you have a quality (true/sathya) paper, everyone will accept it.” Even policy makers go out of their way to help you.

  10. Thanks for this important post, but as long as academic administrators, regulatory bodies, and funding agencies keep asking for this, I do not foresee any solution. However, as suggested by you, it will be good to adopt alternatives to see the real impact of research.
