The impact factor is academia’s worst nightmare. So much has been written about its flaws, both in calculation and application, that there is little point in reiterating the same tired points here (see this piece by Stephen Curry for a good starting point).
Recently, I was engaged in a conversation on Twitter (story of my life…) with the nice folks over at the Scholarly Kitchen and a few researchers. There was a lot of finger pointing, with the blame for impact factor abuse aimed variously at researchers, publishers, funders, Thomson Reuters, and basically every player in the whole scholarly communication environment.
As with most Twitter conversations, very little was achieved in the moderately heated back and forth about all this. What became clear, though, or at least clearer, is that despite everything that has been written about the detrimental effects of the impact factor in academia, it is still widely used: by publishers for advertising, by funders for assessment, by researchers for choosing where to submit their work. The list goes on. As such, there are no innocents in the impact factor game: all are culpable, and all need to take responsibility for its frustrating immortality.
The problem is cyclical if you think about it: publishers use the impact factor to appeal to researchers, researchers use the impact factor to justify their publishing decisions, and funders sit at the top of the triangle facilitating the whole thing. One ‘chef’ of the Kitchen piped in to say that publishers recognise the problems, but still have to use the impact factor because it’s what researchers want. This sort of passive facilitation of a broken system helps no one; it is a way of acknowledging that a metric is problematic while dodging any responsibility for its misuse. The same goes for academics.
(Note: these are just smaller snippets from a larger conversation)
What some of us did seem to agree on in the end, or at least a point that remains important, is that everyone in the scholarly communication ecosystem needs to take responsibility for, and action against, misuse of the impact factor. Pointing fingers and dealing out blame solves nothing: it alleviates accountability without changing anything and, worse, facilitates what is known to be a broken system.
So here are eight ways to kick that nasty habit! The impact factor is often referred to as an addiction for researchers, or a drug, so let’s play with that metaphor.
I remember my first peer review. An Editor for a well-respected Elsevier journal in the Earth Sciences emailed me during the second year of my PhD, asking me to peer review a paper for them. I hadn’t published anything by this point of my PhD, and had received no formal training in how to peer review papers. I initially declined, but was pretty much coerced into doing it, despite my reservations. “It’ll be great training and experience”, I was told. Go on. Go on go on go on go on go on. In the end, I did the review, but got my supervisor to check it over to make sure I was fair, thorough, and constructive. I remember him saying “This is surprisingly good!”, and thinking ‘Thanks..’. But his surprise was more because it was my first peer review, done without any training, rather than anything to do with my ability as a scientist. And rightly so – why should I have been expected to do a good job of peer review at such an early stage in my career, and with no formal training?
I wonder then how many other PhD students are told the same, and thrown into the deep end. ‘Peer review for this journal and receive fame and glory. It doesn’t matter how well you do it, as long as you do it.’
I get the feeling that some researchers regard public, post-publication peer review as a non-rigorous, non-structured and poor alternative to traditional peer review. Much of this might be down to the view that there are no standards, and no control in a world of ‘open’.
This couldn’t be further from the truth.
At venues like ScienceOpen and F1000 Research, there is full Editorial control over peer review. The only difference is that there is an additional safeguard against fraud and abuse. In public peer review, the quality (and quantity) of the process is made explicit. Both the report and the identity of the reviewer are made open. This type of system invites civility and community engagement, and lays the foundation for crediting referees. It also highlights an under-appreciated and overlooked aspect of the work that scientists do to advance knowledge in the real world.
ScienceOpen Editor Dan Cook said: “Personally, I think the public needs to know how hard scientists work to advance our understanding of the world.”
At ScienceOpen, the Editorial office plays two roles. First, the Editorial team for ScienceOpen Research performs all the basic standards checks to make sure that research published is at an appropriate scientific standard. They attempt to protect against pseudoscience, and ensure that the manuscript is prepared to undergo public scrutiny. Second, there are Collection Editors, who manage peer review, curation, and discussion about their own Collections.
Why is Editorial control so important?
For starters, without an Editor, peer review will never get done. Researchers are busy, easily distracted, and working on 1000 other things at once. Opting to go out into the world and randomly distribute your knowledge through peer review, while selfless, is actually quite a rare phenomenon.
Peer review needs structure, coordination, and control. In the same way as traditional peer review, this can be facilitated by an Editor.
But why should this imply a closed system? In a closed system, who is peer reviewing the Editors? What are editorial decisions based on? Why and who are Editors selecting as reviewers?
These are all questions that are obscured by traditional peer review, and traits of a closed, secretive, and subjective system – not the rigorous, objective, gold standard that we hold peer review to be.
At ScienceOpen, we recognise this dual need for Editorial standards combined with transparency. Transparency leads to accountability, which in turn lends itself to a less biased, more rigorous and civil process of peer review.
How does Editorial coordination work with Collections?
Collections are the perfect place to demonstrate and exercise editorial management. Collection Editors, of which there can be up to five per Collection, have the authority to manage the process of peer review, but out in the open.
They can do this either by externally inviting colleagues to review papers within the system or, if those colleagues already have a profile with us, by simply inviting them to review specific papers; either way, referees receive an invitation to peer review.
Quality control is facilitated through ORCID: referees must have five items associated with their ORCID account in order to formally peer review. And to comment, all you need is an ORCID account – simples!
The major difference between a traditional Editor and a Collection Editor is selection. As a traditional Editor, you wield supreme power over what ultimately becomes published in the journal by deciding what gets rejected and what gets sent out to peer review. As a Collection Editor, you don’t reject anything – you filter from pre-existing content depending on your scope.
We’re continuing our series on highlighting diverse perspectives in the vast field of ‘open science’. The last post in this series with Iara Vidal highlighted the opportunities of using altmetrics, as well as insight into scholarly publishing in Brazil. This week, Ernesto Priego talks with us about problems with the scholarly publishing system that led him to start his own journal, The Comics Grid.
There was no real reason to not start your own journal as an academic, to regain control of our own work and to create, disseminate and engage with scholarship in a faster, more transparent, fairer way.
Hi Ernesto! Thanks for joining us here. Could you start off by letting us know a little bit about your background?
I was born in Mexico City. I am Mexican and I have British nationality too. I studied English Literature at the National Autonomous University of Mexico (UNAM) where I also taught and was part of various research projects. I came to the UK to do a master’s in critical theory at UEA Norwich and a PhD in Information Studies at University College London. I currently teach Library and Information Science at City University London.
When did you first hear about open access and open science? What were your initial thoughts?
I cannot recall exactly. I think I first encountered the concept of ‘open access’ via Creative Commons. I was a keen blogger between 1999 and 2006, and I remember that around 2002 I first came across the concept of the ‘commons’. I think it was through Lawrence Lessig that I really got interested in how scholarly communications were incredibly restrictive in comparison to the ideas being discussed by the Free Culture movement. Lessig’s Free Culture (2004) changed things for me. (For more background, I recently talked to Mike Taylor about why open access means so much to me in this interview).
We need to think about the greater good, not just about ourselves as individuals.
You run your own journal, The Comics Grid – what was the motivation behind this?
Realising how difficult and expensive it was to access paywalled research got me quite frustrated with scholarly publishing. When I was doing my PhD I just could not understand why academics were stuck with a largely cumbersome and counter-intuitive system. The level of friction was killing my soul (it still does). It just seemed to me (now I understand better the larger issues) there was no real reason to not start your own journal as an academic, to regain control of our own work and to create, disseminate and engage with scholarship in a faster, more transparent, fairer way. I’ve said before that often scholarly publishing feels like that place where academic content goes to die: the end of the road. I feel publishing should be a point of departure, not the end.
FASE is one of the leading Open Access journals in the fields of Agricultural Engineering, Resources and Biotechnology, Animal Husbandry and Veterinary Medicine, Applied Ecology, Crop Science, Forestry Engineering and Fisheries, Horticulture, and Plant Protection.
By adding their content to ScienceOpen, they gain increased visibility through our platform and promotional services (like this article!), which increases its value amidst a heterogeneous global publishing market.
This cooperation between HEP and ScienceOpen helps to recognise the great work that Chinese publishers are doing to spearhead Open Access publishing, and our dual commitment to enhancing the visibility and impact of scholarly research in Engineering Science fields.
CEO of ScienceOpen Stephanie Dawson said “Open Access is a growing force in China, and we are happy to work with one of the leading publishers, Higher Education Press, to help increase the visibility of Chinese Open Access globally. We are pleased to use Frontiers of Agricultural Science and Engineering to launch this new partnership, as it publishes excellent research in a field addressing pressing issues such as food security in a changing world.”
The advantage of this for HEP is that they gain lots of additional traffic to their content. What publisher doesn’t want that? This means more downloads, and more re-use of the research they publish, which in turn increases the quality and prestige associated with the journal brand. You can track the attention of the Collection easily via reader count aggregates, and altmetric aggregates, as seen here, as well as other measures of re-use.
Researchers can now openly peer review and re-use their content too, which adds substantial value to both the research process and the journal brand – both of which are important in a scholarly publishing system that is becoming progressively more open. We’ll report the progress in these statistics again in a month, so you can see the additional attention that indexing with us generates!
The Collection contains some absolutely awesome papers too! Check these examples out:
We’re running a series to showcase some of the different perspectives in the scholarly publishing and communication world, and in particular regarding the theme of ‘Open Science’. We’ve already heard from Joanne Kamens about her work in making open data repositories and campaigning for greater diversity in STEM; Dan Shanahan discussed issues with the impact factor and assessment in academia; Gal Schkolnik let us know about her research into Shewanella and experiences with Open Access publishing; and Israel Bimpe described his story as a student from Rwanda and global health champion. So quite a mix, and it’s been great to get such a variety of thoughts, perspectives, and experiences.
But we’re not stopping there! We spoke to Iara Vidal who is working on her PhD in Information Science at the Federal University of Rio de Janeiro in Brazil, and has plenty of experience with altmetrics and also in working as a librarian. Here’s her story!
Hi Iara! So can you tell us a bit about your background to get things rolling?
Sure! I had my first experience with scientific research in high school. I was in what we call a “technical school” here in Brazil, studying to be a meteorological technician. In 1998 me and some other students did a study correlating rain levels with the incidence of certain diseases whose transmission is somehow related to water. It was great fun to go looking for all the data we needed, and we actually got a poster accepted at the 10th Brazilian Meteorology Conference (pdf is available here, if you’re curious and can read Portuguese – there’s a short English abstract but that’s it). That was my first scientific event – and honestly, conferences are probably my favourite aspect of academia to this day. For college, I changed from Meteorology into Library Science. I joined a research group in my university and kept presenting papers in small scientific events and student meetings. It was an amazing experience, but when I graduated in early 2005 I decided to go work in libraries instead of staying in academia. I *love* being a librarian, but things became difficult when, through reasons that are too complicated to explain here, I ended up as the sole librarian in a federal agency. Much as I tried, I could not improve my situation. So, in 2012, I decided to leave and pursue an academic career. I got my master’s degree in Information Science in 2014, and have been working on my PhD since 2015.
When did you first hear about open access and open science? What was your initial reaction?
I think I first heard about open access in the early 2000s, maybe in one of the Library and Information Science student meetings I used to go to. But it was only in the past few years that I got more involved in the issue. In 2013 I attended a conference celebrating the 15th anniversary of the SciELO Network (http://www.scielo15.org/en/about/), which got me really excited not only about open access, but also about the role of Latin America and other peripheral regions in all this. As I researched more about open access I got to know about open science as well. My reaction to all this was of excitement (hell yeah let’s free knowledge!), but also questioning: how do we get people to change their behaviour? I think the answer lies in incentives, which increased my interest in research evaluation. I studied altmetrics in my master’s and am now moving to article-level metrics, but the end goal is improving evaluation.
How do we get people to change their behaviour? I think the answer lies in incentives, which increased my interest in research evaluation.
Continuing our series on getting different perspectives into the field of ‘Open Science’, we spoke to Israel Bimpe, a pharmacy student at the School of Medicine and Pharmacy of the University of Rwanda, and a youth role model and champion for global health issues as the Vice President (and Chairperson of the African Regional Office) of the International Pharmaceutical Students’ Federation. He spoke to remind us that Open Science is a global challenge, and I hope this interview helps to illuminate that.
Hi Israel! Could you tell us a little bit about your background?
I’m a Pharmacy Student at the School of Medicine and Pharmacy of the University of Rwanda, doing my final year, which is just an internship, and very excited to graduate in July.
When did you first hear about ‘open science’ and ‘open access’? What were your first thoughts about them?
I heard about Open Science and Open Access during the 68th World Health Assembly in Geneva where I met a friend who was calling for more Open Science and Open Access in a random hostel lobby group discussion. It was so interesting that I got to check about it and even registered for the conference in Brussels (OpenCon), for which I was unfortunately not selected due to my limited knowledge probably.
Open Access would then be very important for students and medical professionals here in matters of accessing updated and accurate information
You are very passionate about global health equality and social justice. Where do you see open access and open science fitting into this?
I have been thinking about ways of linking up my passion to my interest in Open Science and Open Access, but am still struggling to bridge them. I feel like my passion is more about civic engagement, while my interest in Open Science and Open Access is more about education and science. Moreover, I rarely read or hear my Global Health and Social Justice role models refer to Open Access and Open Science. It is still quite confusing for me.
We do struggle very much, as we can only access very old, non-accurate research.
How important is open access to students and medical professionals in Rwanda?
This applies not only to Rwanda but to Africa in general: in order to advance education and science, we need access to recent and up-to-date information. A practical example is that I, in my whole education, never used a paper published in the past two to five years. The usual reasons are that papers are expensive to access and the platforms are not easy to use. Open Access would therefore be very important for students and medical professionals here, both for accessing updated and accurate information and for professional development towards better practice.
Recently, Figshare also launched their pretty cool Collections feature, which is awesome in embracing the additional dimension of non-traditional research outputs with this concept. Figshare now joins ScienceOpen and Mendeley, among others, in recognising the value of thematic groups of digital objects, where the scope and content is defined by the research community, independent of journals and publishers.
ScienceOpen now has 175 Collections, each one representing a place to openly engage with research through peer review, discussion, sharing, and recommending. Each one is managed by a group of Editors or a single Editor, whose role is to assemble the Collection, curate it, and foster community engagement.
The value of this is twofold. Firstly, Editors create and manage a valuable resource for their communities, which anyone can openly contribute to. Secondly, this provides a platform to develop new skills for researchers: public peer review, community management, editorial control. Each of these is part of an essential and core skill-set for researchers.
If you would like to become a Collection Editor, simply shoot us an email at: Jon.Tennant@scienceopen.com, or tweet us at @science_open if that’s your preferred method (or just leave a comment here)! All it takes to become an Editor is your interest. We don’t exclude anyone, we just want to know who is building one so we can provide the best support possible!
We look forward to working with you and making science more open 🙂
Search engines form the core of discovery of research these days. There’s just too much information out there to search journal by journal or on a manual basis.
We highlighted in a previous post the advantages of using ScienceOpen’s dual-layered search and filter functions over others like Google Scholar. Today, we’re happy to announce that we just made it even better!
Say you want to search all of PeerJ’s content. Pop ‘PeerJ’ into the journal search, and it’ll come up with all their content, as it’s all indexed in PubMed. Hey presto, there you have 1530 papers, all with full texts attached. Neat eh! And that will update as more gets published with PeerJ, so you know what to do.
But that’s a lot of content. What you’ve just discovered is the PeerJ megajournal haystack. We want to filter out the needles.
The arXiv is a server that hosts ‘eprints’ or ‘preprints’ of research papers, and is a key publishing platform for many fields, particularly physics and mathematics. Founded back in 1991 by Paul Ginsparg, it currently hosts over 1 million research articles, with more than 8000 submissions per month!
Despite now being 25 years old, the arXiv still represents one of the greatest technological innovations in using the Web for scholarly communication.
While the majority of the content submitted to the arXiv is subsequently also submitted to traditional journals for publication, there is still content which never goes beyond its confines. Irrespective of this, communities engaged with the arXiv still cite articles published there, whether or not they have been formally published in a journal elsewhere.
This is the whole purpose of the arXiv: to facilitate rapid peer-to-peer communication so that science accelerates faster. The fact that all articles are publicly available is incidental, and just happens to be a topic of major interest with the growing open access movement.
However, the arXiv is not peer reviewed in the formal sense. It is moderated, so that junk submissions can be removed, or manuscripts recategorised, but it lacks the additional layer of quality control of traditional peer review.
So while some might think this poses a risk, ask yourself this question: do you re-use articles critical to your research without making sure that you have checked and understand the research to a sufficient degree that you can appropriately cite it? Because that’s peer review, that is, and it applies irrespective of whether an article has already been peer reviewed or not.