We publish from across the whole spectrum of research: Science, Technology, Engineering, Humanities, Mathematics, Social Sciences. Every piece of research deserves an equal chance to be published, irrespective of its field.
We also don’t discriminate based on the type of research. Original research, small-scale studies, opinion pieces, “negative” or null findings, review articles, data and software articles, case reports, and replication studies. We publish it all.
At ScienceOpen, we believe that the Journal Impact Factor (JIF) is a particularly poor way of measuring the impact of scholarly publishing. Furthermore, we think it is a highly misleading metric for research assessment despite its widespread [mis-]use for this purpose, and we strongly encourage researchers to adhere to the principles of DORA and the Leiden Manifesto.
This is why, for our primary publication ScienceOpen Research, we do not obtain or report the JIF. Instead, we provide article-level metrics and a range of other indicators that enhance the context of each article, and we extend this to all 25 million research articles on our platform.
Further reading
A simple proposal for the publication of journal citation distributions (link)
How can academia kick its addiction to the impact factor? (link)
Continuing our ‘open science stars’ series, we’re happy to present Dr. Julien Colomb this week! Julien is a postdoc in Berlin, and we’ve been working together (well, Julien has tolerated my presence…) at Open Science meetups here, which he has used to build an active community over the last 10 months or so. He recently published a cool paper in PeerJ and built a new ScienceOpen Collection, so we asked for his thoughts on and experience with Open Science!
Hi Julien! Thanks for joining us at the ScienceOpen blog. Could you start off by letting us know a bit about your background?
Hi John. My pleasure to be here. [We’ve known each other for a year and he still can’t spell my name…]
I have been interested in neurobiology since high school; I got to work with Drosophila during my Master’s thesis and could not leave the field after that. I worked for about 10 years on neuroanatomy and behaviour in fruit fly larvae and adults, in Switzerland, Paris, and Berlin. In 2013, I decided to stay in Berlin when the mentor of my second post-doc, Prof. Brembs, moved to Regensburg. In the last 3 years, I have been jumping between different jobs in Prof. Winter’s group, wandering in the Berlin startup community (founding Drososhare GmbH), and trying to foster open science and open data. At the moment, I work half-time at the Charité animal outcome core facility, while we work on getting a beta version of the Drososhare product (a platform for sharing transgenic Drosophila between scientists). I also run the Berlin Open Science Meetup.
The impact factor is academia’s worst nightmare. So much has been written about its flaws, both in calculation and application, that there is little point in reiterating the same tired points here (see this post by Stephen Curry for a good starting point).
Recently, I was engaged in a conversation on Twitter (story of my life…) with the nice folks over at the Scholarly Kitchen and a few researchers. There was a lot of finger pointing, with the blame for impact factor abuse aimed at researchers, publishers, funders, Thomson Reuters, and basically every player in the whole scholarly communication environment.
As with most Twitter conversations, very little was achieved in the moderately heated back and forth about all this. What did become clear, though, or at least clearer, is that despite everything that has been written about the detrimental effects of the impact factor in academia, it is still widely used: by publishers for advertising, by funders for assessment, by researchers for choosing where to submit their work. The list is endless. As such, there are no innocents in the impact factor game: all are culpable, and all need to take responsibility for its frustrating immortality.
The problem is cyclical if you think about it: publishers use the impact factor to appeal to researchers, researchers use the impact factor to justify their publishing decisions, and funders sit at the top of the triangle facilitating the whole thing. One ‘chef’ of the Kitchen piped in to say that publishers recognise the problems but still have to use the impact factor because it’s what researchers want. This sort of passive facilitation of a broken system helps no one: it acknowledges that the metric is a problem while declining to take any responsibility for its fundamental mis-use. The same goes for academics.
Oh, I didn’t realise it was that simple. Problem solved.
Eventually, we agreed on the point that finding a universal solution to impact factor mis-use is difficult. If it were so easy, there’d be start-ups stepping in to capitalise on it!
@Protohedgehog If I had that answer I'd probably be helming a publishing startup.
(Note: these are just smaller snippets from a larger conversation)
What some of us did seem to agree on in the end, or at least a point that remains important, is that everyone in the scholarly communication ecosystem needs to take responsibility for, and action against, mis-use of the impact factor. Pointing fingers and dealing out blame solves nothing: it alleviates accountability without changing anything and, worse, facilitates what is known to be a broken system.
So here are eight ways to kick that nasty habit! The impact factor is often referred to as an addiction, or a drug, for researchers, so let’s play with that metaphor.
Eugene Garfield, one of the founders of bibliometrics and scientometrics, once claimed that “Citation indexes resolve semantic problems associated with traditional subject indexes by using citation symbology rather than words to describe the content of a document.” This statement heralded a new dawn of Web-based citation measurement, implemented as a way to describe the academic re-use of research.
However, Garfield had only partially solved the problem of measuring re-use: one of the major problems with citation counts is that they are largely context-free; they don’t tell us anything about why research is being re-used. Nonetheless, citation counts are now at the very heart of academic systems for two main reasons:
They are fundamental for grant, hiring and tenure decisions.
They form the core of how we currently assess academic impact and prestige.
Working out article-level citation counts is actually pretty complicated, though, and depends on where you source your information. If you read the last blog post here, you’ll have seen that search results from Google Scholar, Web of Science, PubMed, and Scopus all vary to quite some degree. Well, it is the same for citations too, and it comes down to what each service indexes. Scopus indexes 12,850 journals, the largest documented number at the moment; PubMed covers 6,000 journals of mostly clinical content; and Web of Science offers broader coverage, with 8,700 journals. However, unless you pay for both Web of Science and Scopus, you won’t be able to see who is re-using work or how much, and even if you are granted access, the two services return inconsistent results. Not too useful when these numbers matter for impact assessment criteria and your career.
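To make the coverage effect concrete, here is a toy sketch in Python. The journal names, coverage sets, and counts are entirely hypothetical; the point is simply that an index can only count citations coming from journals it covers, so the same article ends up with a different count in each service.

```python
# Toy illustration: hypothetical citing papers and hypothetical index coverage.
citing_papers = {
    "paper_1": "journal_A",
    "paper_2": "journal_B",
    "paper_3": "journal_C",
}

# Made-up coverage sets; real coverage lists are far larger and differ in detail.
coverage = {
    "Scopus": {"journal_A", "journal_B", "journal_C"},
    "Web of Science": {"journal_A", "journal_B"},
    "PubMed": {"journal_B"},
}

for index_name, journals in coverage.items():
    # An index only "sees" a citation if the citing journal is in its coverage.
    count = sum(1 for j in citing_papers.values() if j in journals)
    print(f"{index_name}: {count} citation(s)")

# Output: Scopus: 3, Web of Science: 2, PubMed: 1 -- one article, three counts.
```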
Hagen Cartoons, ‘Struggling scientists’. (Source: CartoonStock)
Google Scholar, however, offers a free citation indexing service based, in theory, on all published journals, and possibly a whole load of ‘grey literature’ besides. For the majority of researchers now, Google Scholar is the go-to powerhouse search tool. Accompanying this power, though, is a whole web of secrecy: it is unknown what Google Scholar actually crawls, but you can bet it reaches pretty far, judging by the amount of self-archived, and often illegally archived, content it returns from searches. So the basis of its citation index is a bit of a mystery, lacks any form of quality control, and is confounded by the fact that it can include citations from non-peer-reviewed works, which will be an issue for some.
Academic citations represent the structured genealogy or network of an idea, and the associations between themes or topics. I like to think that citation counts tell us how imperfect our knowledge is in a certain area, and how much researchers are working to change that. Researchers quite like citations: we like to know how many citations we’ve got, and who is citing and re-using our work. These two concepts are quite different: re-use can be reflected in a simple number, which is fine in a closed system, but to get a deeper context of how research is being re-used, and to trace the genealogy of knowledge, you need openness.
At ScienceOpen, we have our own way to measure citations. We’ve recently implemented it, and are only just beginning to realise the importance of this metric. We’re calling it the Open Citation Index, and it represents a new way to measure the retrieval of scientific information.
But what is the Open Citation Index, and how is it calculated? The core of ScienceOpen is a huge corpus of open access articles drawn primarily from PubMed Central and arXiv. This comes to about 2 million open access records, each with its own reference list. Using a metadata extraction engine, we take each of these references and create an article stub for it. These stubs, or metadata records, form the core of our citation network. The number of citations derived from this network is displayed on each article, and each item that cites another can be openly accessed from within our archive.
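As a rough illustration of that stub-building step (a minimal sketch with made-up records and identifiers, not ScienceOpen’s actual extraction engine), every reference in every open access article becomes a metadata record of its own, and citation counts fall out of the resulting network:

```python
from collections import defaultdict

# Hypothetical input: open access records with already-parsed reference lists.
open_access_records = [
    {"id": "doi:10.1000/a", "references": ["doi:10.1000/b", "doi:10.1000/c"]},
    {"id": "doi:10.1000/b", "references": ["doi:10.1000/c"]},
]

stubs = {}                   # metadata records, including stubs for cited-only works
cited_by = defaultdict(set)  # cited id -> set of citing article ids

for record in open_access_records:
    # A full open access record always overwrites any placeholder stub.
    stubs[record["id"]] = {"id": record["id"], "full_text": True}
    for target in record["references"]:
        # Works known only from a reference list get a metadata stub.
        stubs.setdefault(target, {"id": target, "full_text": False})
        cited_by[target].add(record["id"])

# The citation count shown on an article is the size of its citing set.
print(len(cited_by["doi:10.1000/c"]))      # 2
print(sorted(cited_by["doi:10.1000/c"]))   # ['doi:10.1000/a', 'doi:10.1000/b']
```

Because the stubs are keyed by identifier, an item cited by many open access articles accumulates all of those links, which is what allows each citing article to be retrieved from within the archive.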
Visualising citation networks: pretty, but complex. (Source)
So the citation counts are based exclusively on open access publications, and therefore provide a pan-publisher, article-level measure of how ‘open’ your idea is. Because of the way these data are gathered, every article record has at least one citation, so we explicitly provide a level of cross-publisher content filtering. It is important that we find ways to measure the effect of open access, and the Open Citation Index provides one way to do this. For researchers, the Open Citation Index is about gaining prestige in a system that is gradually, but inevitably and inexorably, moving towards ‘open’ as the default way of conducting research.
In the future, we will work with publishers to combine their content with our archives and enhance the Open Citation Index, developing a richer, increasingly transparent and more precise metric of how research is being re-used.
Apart from the time and money arguably wasted in rejecting perfectly good research, this apparent relationship has important implications for researchers. They will often submit first to higher-impact (and therefore apparently more selective) journals in the hope that this confers some sort of prestige on their work, rather than letting the research speak for itself. Given the relatively high likelihood of rejection, submissions then continue down the ‘impact ladder’ until a more receptive venue is finally found.
Impressions of the 25th European Students’ Conference 2014 in Berlin
Over the last few months I’ve had the privilege of chatting with many young researchers from different areas of science. Last week, I was delighted to attend the 25th European Students’ Conference 2014 in Berlin, where I had been invited to organize an afternoon workshop entitled Perspectives on Scientific Publishing, with about 100 participants. It was terrific to spend almost three hours with so many students who were keen to find out more about the future of scholarly communication.
My interest in this topic was sparked by an earlier panel discussion on scholarly publishing, where I observed that a significant part of the audience were Ph.D. students or post-docs. When one of the speakers talked about new opportunities in Open Access publishing, a very intense discussion began. Almost all the young scientists in the audience were excited and motivated by the principles and vision behind Open Access. They said they would like to change the current publishing system and participate in a more open conversation about their research with peers. I was thrilled, because that is exactly what we are trying to develop at ScienceOpen.
However, “If I publish my work Open Access, I will have difficulties in my future career, I am afraid, because I need the highest Impact Factor (IF) possible” said one of the young scholars, dampening the enthusiasm, and in the end most of his colleagues agreed.
“If I publish my work Open Access, I will have difficulties in my future career, I am afraid, because I need the highest Impact Factor (IF) possible.”
But how real is this risk for junior faculty, who will have the most important impact on the future of academia? To find out more about the perspectives of grad students and junior researchers at institutions and universities, I went looking for the arguments against active participation in Open Access publishing. Although younger researchers would like to have a public discussion about their science with their peers, almost everyone I talked to stressed that they had been instructed by their senior academic advisor to aim for a high-IF journal to publish their work. And most young scientists had the impression that there are relatively few quality Open Access journals, and that even many of these have a low IF, if any. So I next asked some of their supervisors and professors for their thoughts. Amazingly, many of them emphasized that their graduate students and junior researchers themselves insisted on publishing in a “Champions League” journal or, at least, in a “Premier League” journal with a high IF.
Who was right? I believe that we don’t need to answer this question in order to understand why young researchers are wary of Open Access publishing opportunities.
Let’s summarize the major reasons that motivate a researcher to publish her/his work:
(A) To record and archive results.
(B) To share new findings with colleagues.
(C) To receive feedback from experts / peers.
(D) To get recognition by the scientific community.
(E) To report results to the public, funding bodies, and others.
Next, let us analyze which reasons for publishing are most relevant to young researchers in particular. Reporting results (E) is a more formal reason, required when one has received financial support from funding organizations. As for archiving (A), it is not a particular motivation for junior scientists. By contrast, sharing with colleagues (B) may matter more to those who have just started to build their academic network. We all agree that younger scientists must not only actively promote themselves by sharing new results of their work, but also intensify dialogue with their peers. They therefore depend on feedback from experts and peers (C) much more than a senior researcher who has established his or her expertise over decades. Both (B) and (C) will hopefully result in recognition from the scientific community (D), which has long been considered the conditio sine qua non in academia for all junior researchers who want a successful academic career. Everyone I talked to agreed, and most of my scholarly colleagues confirmed that this list appeared consistent and complete in describing the relevance of publishing for young researchers.
But where are the Impact Factors in my list? Where are big journal brands?
“But where are the Impact Factors in my list? Where are big journal brands?”
Until relatively recently, recognition was largely measured by citations. Today, with the growing use of social networks, we should broaden our view and also associate credit for scientific work with mentions, likes, or retweets. These attributes of modern communication in social networks are an immediate and uniquely fast way to give and earn credit in scholarly publishing. There is an ever increasing number of examples of excellent papers being recognized within minutes of being published Open Access. Citations are important, but it is the article, and the individuals who authored it, that should get the credit. And there is growing evidence that papers published Open Access are read, and ultimately cited, more often. The impact factor is a “toxic influence” on science, as Randy Schekman, Nobel laureate and founder of eLife, recently stated.
“Impact factor is a “toxic influence” on science.”
Finally, we do not need big journal brands or an Impact Factor to evaluate the relevance and quality of research, neither for senior scientists nor for young researchers. The latter group, however, has a significant intrinsic advantage: they are much more accustomed to communicating with social media tools. If they continue to use these tools when starting their academic careers, they will strongly influence the traditional, old-fashioned ways of crediting academic research.
My conclusion can therefore be considered as an invitation to the younger generation of researchers:
Replace pay-walled journals with new open science technologies to publish your scientific results publicly
Continue to use social network tools to communicate about and discuss recent research with others
Adopt alternative metrics to measure scientific relevance in addition to classical citations
Liz Allen, who works with me at ScienceOpen, also recently wrote this blog post to encourage younger researchers to be part of the open scientific conversation and suggested different ways for them to get involved.
It will be your generation, a decade from now, that will craft the careers of other young researchers. Nobody else. Therefore you should not be afraid of publishing Open Access or of submitting your next paper to an alternative open science platform. The more people like you who follow this path of modern scholarly publishing, the less emphasis will be put on classical incentives for academic evaluation. Open Access and active communication about new results in science, via social media and open science platforms such as ScienceOpen, can increase both the usage and the impact of your work.
“We do not need big journal brands or an Impact Factor to evaluate the relevance and quality of research.”
And my request to senior scientists who are presently judging the quality of the younger generation of researchers: challenge yourselves to look at their social networking record and their willingness to shape the new measures of recognition. And do not forget: Access is not a sufficient condition for citation, but it is a necessary one. Open Access dramatically increases the number of potential users of any given article by adding those users who would otherwise have been unable to access it, as Stevan Harnad and Tim Brody demonstrated 10 years ago. Give the pioneers a chance – they are the future of research!
“Give the pioneers a chance – they are the future of research.”
David Black is Secretary General of the International Council for Science (ICSU) and Professor of Organic Chemistry at the University of New South Wales, Australia. An advocate of Open Access for scientific data in his role at ICSU, Professor Black is a proponent of the initiatives of ICSU and ICSU-affiliated groups, such as the Committee on Freedom and Responsibility in the Conduct of Science (CFRS), the ICSU World Data System (ICSU-WDS), the International Council for Scientific and Technical Information (ICSTI), and ICSU’s Strategic Coordinating Committee on Information and Data (SCCID).
As a newcomer on the Open Access publishing scene, ScienceOpen relies on the support of a wide range of academics. With this interview we would like to profile Advisory Board member Peter Suber (http://bit.ly/petersuber) and share the valuable perspective he brings to our organization.
Today’s author interview comes from Carol Perez-Iratxeta (http://goo.gl/fwloa7), a bioinformatics researcher based at the Ottawa Hospital Research Institute (OHRI) in Ottawa, Canada. Her research concerns data mining and computational genomic analysis applied to human disease.
ScienceOpen continues our series of interviews with our new authors with Professor Lorenzo Iorio (https://www.scienceopen.com/profile/lorenzo_iorio), who has just published an article on ScienceOpen entitled “Orbital effects of a monochromatic plane gravitational wave with ultra-low frequency incident on a gravitationally bound two-body system” (http://goo.gl/kCYgwd).