
Tag: altmetrics

Open Science Stars: An interview with Daniel Shanahan

Last week, we kicked off a series interviewing some of the top ‘open scientists’ by talking to Dr. Joanne Kamens of Addgene, and had a look at some of the great work she’s been doing to promote a culture of data sharing and equal opportunity for researchers. Today, we’ve got something completely different, with Daniel Shanahan of BioMed Central, who recently published a really cool PeerJ paper on auto-correlation and the impact factor.

Hi Daniel! To start things off, can you tell us a bit about your background?

I completed a Master’s degree in Experimental and Theoretical Physics at the University of Cambridge, but must admit I did my Master’s more to have an extra year to play rugby for the university than out of a love of micro-colloidal particles and electron lasers. I have always loved science, though, and found my way into STM publishing, albeit by a slightly less than traditional route.

Continue reading “Open Science Stars: An interview with Daniel Shanahan”  

The Open Citation Index

Eugene Garfield, one of the founders of bibliometrics and scientometrics, once claimed that “Citation indexes resolve semantic problems associated with traditional subject indexes by using citation symbology rather than words to describe the content of a document.” This statement heralded a new dawn of Web-based citation measurement, implemented as a way to describe the academic re-use of research.

However, Garfield had only reached a partial solution to a problem about measuring re-use, as one of the major problems with citation counts is that they are primarily contextless: they don’t tell us anything about why research is being re-used. Nonetheless, citation counts are now at the very heart of academic systems for two main reasons:

  • They are fundamental for grant, hiring and tenure decisions.
  • They form the core of how we currently assess academic impact and prestige.

Working out article-level citation counts is actually pretty complicated, though, and depends on where you’re sourcing your information from. If you read the last blog post here, you’ll have seen that search results from Google Scholar, Web of Science, PubMed, and Scopus all vary to quite some degree. Well, it is the same for citations too, and it comes down to what’s being indexed by each. Scopus indexes 12,850 journals, the largest documented number at the moment. PubMed, on the other hand, covers 6,000 journals of mostly clinical content, and Web of Science offers broader coverage with 8,700 journals. However, unless you pay for Web of Science and Scopus, you won’t know who’s re-using work or how much, and even if you do have access, the two services give inconsistent results. Not too useful when these numbers matter for impact assessment criteria and your career.

Hagen Cartoons’ ‘Struggling scientists’ (source: Cartoonstock).

Google Scholar, however, offers a free citation indexing service, based, in theory, on all published journals, and possibly a whole load of ‘grey literature’. For the majority of researchers now, Google Scholar is the go-to powerhouse search tool. Accompanying this power, though, is a whole web of secrecy: it is unknown exactly what Google Scholar crawls, but you can bet it reaches pretty far, given the amount of self-archived, and often illegally archived, content it returns from searches. So the basis of its citation index is a bit of a mystery, lacks any form of quality control, and is confounded by the fact that it can include citations from non-peer-reviewed works, which will be an issue for some.

Academic citations represent the structured genealogy or network of an idea, and the association between themes or topics. I like to think that citation counts tell us how imperfect our knowledge is in a certain area, and how hard researchers are working to change that. Researchers quite like citations; we like to know how many citations we’ve got, and who is citing and re-using our work. These two concepts are quite different: re-use can be reflected by a simple number, which is fine in a closed system. But to get a deeper context of how research is being re-used, and to trace the genealogy of knowledge, you need openness.

At ScienceOpen, we have our own way to measure citations. We’ve recently implemented it, and are only just beginning to realise the importance of this metric. We’re calling it the Open Citation Index, and it represents a new way to measure the retrieval of scientific information.

But what is the Open Citation Index, and how is it calculated? The core of ScienceOpen is a huge corpus of open access articles drawn primarily from PubMed Central and arXiv. This amounts to about 2 million open access records, each of which comes with its own reference list. Using a clever metadata extraction engine, we take each of these cited references and create an article stub for it. These stubs, or metadata records, form the core of our citation network. The number of citations derived from this network is displayed on each article, and each item that cites another can be openly accessed from within our archive.
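To make the mechanics concrete, here is a minimal sketch in Python of how a citation network like this could be assembled from records and their reference lists. The DOIs are invented for illustration, and this toy is only an assumption about the general approach, not ScienceOpen’s actual metadata extraction engine.

```python
from collections import defaultdict

# Toy illustration only, not ScienceOpen's actual pipeline. Assume each open
# access record is identified by a DOI (invented here) and carries a parsed
# reference list extracted from its full text.
records = {
    "10.1000/article-a": ["10.1000/article-b", "10.1000/article-c"],
    "10.1000/article-b": ["10.1000/article-c"],
}

# Create a bare "stub" record for every cited item not already in the corpus,
# so the network also covers works published elsewhere.
stubs = {ref for refs in records.values() for ref in refs if ref not in records}

# Count incoming citations across the whole network of records and stubs.
citation_counts = defaultdict(int)
for citing, refs in records.items():
    for cited in refs:
        citation_counts[cited] += 1

for doi in sorted(set(records) | stubs):
    print(doi, citation_counts[doi])
```

Note that a stub only enters the network because something in the corpus cites it, which is why every record derived this way carries at least one citation.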

Visualising citation networks: pretty, but complex. (Source)

So the citation counts are based exclusively on open access publications, and therefore provide a pan-publisher, article-level measure of how ‘open’ your idea is. Because of the way these data are gathered, every article record has at least one citation, so we explicitly provide a level of cross-publisher content filtering. It is important that we find ways to measure the effect of open access, and the Open Citation Index provides one way to do this. For researchers, the Open Citation Index is about gaining prestige in a system that is gradually, but inevitably and inexorably, moving towards ‘open’ as the default way of conducting research.

In the future, we will work with publishers to combine their content with our archives and enhance the Open Citation Index, developing a richer, increasingly transparent and more precise metric of how research is being re-used.

In:  Altmetrics  

The relationship between journal rejections and their impact factors

Frontiers recently published a fascinating article about the relationship between journals’ impact factors (IF) and their rejection rates. It was a neat little study designed around the perception, held by many publishers, that in order to generate high citation counts for their journals they must be highly selective and publish only the ‘highest quality’ work.

Apart from the issues involved in what can be seen as wasting time and money by rejecting perfectly good research, this apparent relationship has important implications for researchers. They often submit to higher-impact (and therefore apparently more selective) journals in the hope that this confers some sort of prestige on their work, rather than letting their research speak for itself. Given the relatively high likelihood of rejection, submissions then continue down the ‘impact ladder’ until a more receptive venue is finally found for their research.

Continue reading “The relationship between journal rejections and their impact factors”  

In:  Research  

New ScienceOpen study on the effectiveness of student evaluations of teaching highlights gender bias against female instructors

Student evaluations of teaching form a core part of our education system. However, there is little evidence to demonstrate that they are effective, or even work as they’re supposed to. This is despite such rating systems having been used, studied and debated for almost a century.

A new analysis published in ScienceOpen Research offers evidence against the reliability of student evaluations of teaching, particularly as a measure of teaching effectiveness and as a basis for tenure or promotion decisions. In addition, the new study identified a bias against female instructors.

The new study by Anne Boring, Kellie Ottoboni, and Philip Stark (ScienceOpen Board Member) has already been picked up by several major news outlets including Inside Higher Education and Pacific Standard. This gives it an altmetric score of 54 (at the time of writing), which is the highest for any ScienceOpen Research paper to date!

Continue reading “New ScienceOpen study on the effectiveness of student evaluations of teaching highlights gender bias against female instructors”  

Interview with Advisory Board member Peter Suber

As a newcomer to the Open Access publishing scene, ScienceOpen relies on the support of a wide range of academics. With this interview we would like to profile Advisory Board member Peter Suber (http://bit.ly/petersuber) and share the valuable perspective he brings to our organization.

One of the original founders of the Open Access movement, Peter Suber is currently director of the Harvard Office for Scholarly Communication (https://osc.hul.harvard.edu/) and the Harvard Open Access Project (http://cyber.law.harvard.edu/hoap). His latest book,
Continue reading “Interview with Advisory Board member Peter Suber”
