Well, we’ve had some absolute stars recently in our ‘open science’ series! If you haven’t seen them yet, head over and check them out – such a diverse array of experiences and perspectives! Today we spoke with Josh King, the founder of Brevy. It’s an awesome new platform, and we’ll let Josh tell you more about it here, enjoy!
Hi Josh, thanks for joining us! Could you tell us a bit about why you started Brevy?
Brevy is an independent, volunteer group of a few stubborn individuals who work on the project during our off hours (read “nights and weekends”). While my own day job is in science outreach, I work with a couple of other partners (a fantastic computer science start-up owner and a behavioural psychologist make up our merry band) to help direct and maintain the site. We’re nothing special on our own, so the real stars here are those who pitch in adding summaries to Brevy or introducing it in class assignments to help grow the body of content!
When did you first hear about Open Access and Open Science? What did you first think?
That would likely be during my undergraduate years studying biochemistry and becoming hopelessly frustrated trying to write reports using papers I often had no access to (even with our university library!). At the time, I dismissed the concepts as fanciful dreams, but thankfully here we are, with open access a growing paradigm and various open science platforms blossoming around the web.
What do you think the biggest problem with the current scholarly publishing system is?
Meaningful publishing. By reasonable estimates, more than 1,000,000 academic papers are published each year. These works appear on platforms known largely only to academics, and then only to a specific subset of academia. Publications on these platforms are not always accessible even to this select group and generally do little to support further dialogue or dissemination, with a surprisingly large number going uncited. Taken pessimistically, this is tantamount to ejecting hundreds of thousands of new pieces of knowledge into the void each year.
We can look at this optimistically, however: there are hundreds of thousands of potentially exciting, ground-breaking new ideas appearing all the time that most of us simply don’t know about! But to see it this way, to truly believe it, we have to start caring about the meaningfulness of research. We have to start thinking about types of impact other than citation count, and about means of prestige other than the journal name. And we have to care what our work means to the world outside academia.
The impact factor is academia’s worst nightmare. So much has been written about its flaws, both in calculation and application, that there is little point in reiterating the same tired points here (this piece by Stephen Curry is a good starting point).
Recently, I was engaged in a conversation on Twitter (story of my life…) with the nice folks over at the Scholarly Kitchen and a few researchers. There was a lot of finger pointing, with the blame for impact factor abuse being aimed at researchers, publishers, funders, Thomson Reuters, and basically every player in the whole scholarly communication environment.
As with most Twitter conversations, very little was achieved in the moderately heated back and forth. What became clear though, or at least clearer, is that despite everything that has been written about its detrimental effects in academia, the impact factor is still widely used: by publishers for advertising, by funders for assessment, by researchers for choosing where to submit their work. The list is endless. As such, there are no innocents in the impact factor game: all are culpable, and all need to take responsibility for its frustrating immortality.
The problem is cyclical if you think about it: publishers use the impact factor to appeal to researchers, researchers use the impact factor to justify their publishing decisions, and funders sit at the top of the triangle facilitating the whole thing. One ‘chef’ of the Kitchen piped in to say that publishers recognise the problems, but still have to use the impact factor because it’s what researchers want. This sort of passive facilitation of a broken system helps no one: it acknowledges that the metric is a problem while ducking any responsibility for its fundamental misuse. The same goes for academics.
(Note: these are just smaller snippets from a larger conversation)
What some of us did seem to agree on in the end, or at least a point that remains important, is that everyone in the scholarly communication ecosystem needs to take responsibility for, and action against, misuse of the impact factor. Pointing fingers and dealing out blame solves nothing: it alleviates accountability without changing anything and, worse, facilitates what is known to be a broken system.
The impact factor is often described as an addiction or a drug for researchers, so let’s play with that metaphor: here are eight ways to kick that nasty habit!
We’re continuing our series on highlighting diverse perspectives in the vast field of ‘open science’. The last post in this series with Iara Vidal highlighted the opportunities of using altmetrics, as well as insight into scholarly publishing in Brazil. This week, Ernesto Priego talks with us about problems with the scholarly publishing system that led him to start his own journal, The Comics Grid.
There was no real reason to not start your own journal as an academic, to regain control of our own work and to create, disseminate and engage with scholarship in a faster, more transparent, fairer way.
Hi Ernesto! Thanks for joining us here. Could you start off by letting us know a little bit about your background?
I was born in Mexico City. I am Mexican and I have British nationality too. I studied English Literature at the National Autonomous University of Mexico (UNAM) where I also taught and was part of various research projects. I came to the UK to do a master’s in critical theory at UEA Norwich and a PhD in Information Studies at University College London. I currently teach Library and Information Science at City University London.
When did you first hear about open access and open science? What were your initial thoughts?
I cannot recall exactly. I think I first encountered the concept of ‘open access’ via Creative Commons. I was a keen blogger between 1999 and 2006, and I remember that around 2002 I first came across the concept of the ‘commons’. I think it was through Lawrence Lessig that I really got interested in how scholarly communications were incredibly restrictive in comparison to the ideas being discussed by the Free Culture movement. Lessig’s Free Culture (2004) changed things for me. (For more background, I recently talked to Mike Taylor about why open access means so much to me in this interview.)
We need to think about the greater good, not just about ourselves as individuals.
You run your own journal, The Comics Grid – what was the motivation behind this?
Realising how difficult and expensive it was to access paywalled research got me quite frustrated with scholarly publishing. When I was doing my PhD I just could not understand why academics were stuck with a largely cumbersome and counter-intuitive system. The level of friction was killing my soul (it still does). It just seemed to me (now I understand better the larger issues) there was no real reason to not start your own journal as an academic, to regain control of our own work and to create, disseminate and engage with scholarship in a faster, more transparent, fairer way. I’ve said before that often scholarly publishing feels like that place where academic content goes to die: the end of the road. I feel publishing should be a point of departure, not the end.
We’re running a series to showcase some of the different perspectives in the scholarly publishing and communication world, and in particular regarding the theme of ‘Open Science’. We’ve already heard from Joanne Kamens about her work in making open data repositories and campaigning for greater diversity in STEM; Dan Shanahan discussed issues with the impact factor and assessment in academia; Gal Schkolnik let us know about her research into Shewanella and experiences with Open Access publishing; and Israel Bimpe described his story as a student from Rwanda and global health champion. So quite a mix, and it’s been great to get such a variety of thoughts, perspectives, and experiences.
But we’re not stopping there! We spoke to Iara Vidal who is working on her PhD in Information Science at the Federal University of Rio de Janeiro in Brazil, and has plenty of experience with altmetrics and also in working as a librarian. Here’s her story!
Hi Iara! So can you tell us a bit about your background to get things rolling?
Sure! I had my first experience with scientific research in high school. I was in what we call a “technical school” here in Brazil, studying to be a meteorological technician. In 1998 me and some other students did a study correlating rain levels with the incidence of certain diseases whose transmission is somehow related to water. It was great fun to go looking for all the data we needed, and we actually got a poster accepted at the 10th Brazilian Meteorology Conference (pdf is available here, if you’re curious and can read Portuguese – there’s a short English abstract but that’s it). That was my first scientific event – and honestly, conferences are probably my favourite aspect of academia to this day. For college, I changed from Meteorology into Library Science. I joined a research group in my university and kept presenting papers in small scientific events and student meetings. It was an amazing experience, but when I graduated in early 2005 I decided to go work in libraries instead of staying in academia. I *love* being a librarian, but things became difficult when, through reasons that are too complicated to explain here, I ended up as the sole librarian in a federal agency. Much as I tried, I could not improve my situation. So, in 2012, I decided to leave and pursue an academic career. I got my master’s degree in Information Science in 2014, and have been working on my PhD since 2015.
When did you first hear about open access and open science? What was your initial reaction?
I think I first heard about open access in the early 2000s, maybe in one of the Library and Information Science student meetings I used to go to. But it was only in the past few years that I got more involved in the issue. In 2013 I attended a conference celebrating the 15th anniversary of the SciELO Network (http://www.scielo15.org/en/about/), which got me really excited not only about open access, but also about the role of Latin America and other peripheral regions in all this. As I researched more about open access I got to know about open science as well. My reaction to all this was of excitement (hell yeah let’s free knowledge!), but also questioning: how do we get people to change their behaviour? I think the answer lies in incentives, which increased my interest in research evaluation. I studied altmetrics in my master’s and am now moving to article-level metrics, but the end goal is improving evaluation.
How do we get people to change their behaviour? I think the answer lies in incentives, which increased my interest in research evaluation.
Last week, we kicked off a series interviewing some of the top ‘open scientists’ by interviewing Dr. Joanne Kamens of Addgene, and had a look at some of the great work she’d been doing in promoting a culture of data sharing, and equal opportunity for researchers. Today, we’ve got something completely different, with Daniel Shanahan of BioMed Central who recently published a really cool PeerJ paper on auto-correlation and the impact factor.
Hi Daniel! To start things off, can you tell us a bit about your background?
I completed a Master’s degree in Experimental and Theoretical Physics at the University of Cambridge, but must admit I did my Master’s more to have an extra year to play rugby for the university than out of a love of micro-colloidal particles and electron lasers. I have always loved science though, and found my way into STM publishing, albeit by a slightly less than traditional route.
Eugene Garfield, one of the founders of bibliometrics and scientometrics, once claimed that “Citation indexes resolve semantic problems associated with traditional subject indexes by using citation symbology rather than words to describe the content of a document.” This statement heralded the advent of Web-based citation measurement, implemented as a way to describe the academic re-use of research.
However, Garfield had only partially solved the problem of measuring re-use: one of the major shortcomings of citation counts is that they are largely contextless, telling us nothing about why research is being re-used. Nonetheless, citation counts are now at the very heart of academic systems for two main reasons:
They are fundamental for grant, hiring and tenure decisions.
They form the core of how we currently assess academic impact and prestige.
Working out article-level citation counts is actually pretty complicated, though, and depends on where you source your information. If you read the last blog post here, you’ll have seen that search results from Google Scholar, Web of Science, PubMed, and Scopus all vary to quite some degree. Well, it is the same for citations too, and it comes down to what each service indexes. Scopus indexes 12,850 journals, the largest documented number at the moment; PubMed, on the other hand, covers 6,000 journals of mostly clinical content, and Web of Science offers broader coverage with 8,700 journals. However, unless you pay for both Web of Science and Scopus, you can’t see who is re-using work or how much, and even if you are granted access, the two services return inconsistent results. Not too useful when these numbers matter for impact assessment criteria and your career.
Google Scholar, however, offers a free citation indexing service based, in theory, on all published journals, plus a whole load of ‘grey literature’. For the majority of researchers now, Google Scholar is the go-to powerhouse search tool. Accompanying this power, though, is a whole web of secrecy: it is unknown exactly what Google Scholar crawls, but you can bet it reaches pretty far, given the amount of self-archived, and often illegally archived, content it returns from searches. So the basis of its citation index is a bit of a mystery, lacks any form of quality control, and is confounded by the fact that it can include citations from non-peer-reviewed works, which will be an issue for some.
Academic citations represent the structured genealogy or network of an idea, and the association between themes or topics. I like to think that citation counts tell us how imperfect our knowledge is in a certain area, and how much researchers are working to change that. Researchers quite like citations; we like to know how many citations we’ve got, and who it is who’s citing and re-using our work. These two concepts are quite different: re-use can be reflected by a simple number, which is fine in a closed system. But to get a deeper context of how research is being re-used and to trace the genealogy of knowledge, you need openness.
At ScienceOpen, we have our own way to measure citations. We’ve recently implemented it, and are only just beginning to realise the importance of this metric. We’re calling it the Open Citation Index, and it represents a new way to measure the retrieval of scientific information.
But what is the Open Citation Index, and how is it calculated? The core of ScienceOpen is based on a huge corpus of open access articles drawn primarily from PubMed Central and arXiv. This forms about 2 million open access records, and each one comes with its own reference list. Using a clever metadata extraction engine, we take each of these references and create an article stub for it. These stubs, or metadata records, form the core of our citation network. The number of citations derived from this network is displayed on each article, and each item that cites another can be openly accessed from within our archive.
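The stub-building step can be sketched in a few lines of code. This is only a minimal illustration of the general idea, not ScienceOpen’s actual engine; the article IDs and the record structure are hypothetical:

```python
from collections import defaultdict

def build_citation_index(articles):
    """Build a cross-publisher citation network from reference lists.

    `articles` maps an article ID (e.g. a DOI) to the list of IDs it cites.
    Cited works not already in the corpus get a minimal 'stub' record, and
    every record carries a count of incoming citations.
    """
    records = {aid: {"stub": False, "cited_by": 0} for aid in articles}
    for aid, refs in articles.items():
        for ref in refs:
            if ref not in records:
                # a work we only know from a reference list: create a stub
                records[ref] = {"stub": True, "cited_by": 0}
            records[ref]["cited_by"] += 1
    return records

# Hypothetical two-article corpus, for illustration only
corpus = {
    "10.1000/a": ["10.1000/b", "10.1000/x"],
    "10.1000/b": ["10.1000/x"],
}
index = build_citation_index(corpus)
# "10.1000/x" is not in the corpus, so it becomes a stub with 2 citations
```

Because every stub exists only by virtue of being cited, each stub record enters the network with at least one incoming citation, which matches the filtering property described below.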
So the citation counts are based exclusively on open access publications, and therefore provide a pan-publisher, article-level measure of how ‘open’ your idea is. Because of the way these data are gathered, every stub record has at least one citation, so we explicitly provide a level of cross-publisher content filtering. It is pertinent that we find ways to measure the effect of open access, and the Open Citation Index provides one way to do this. For researchers, the Open Citation Index is about gaining prestige in a system that is gradually, but inevitably and inexorably, moving towards ‘open’ as the default way of conducting research.
In the future, we will work with publishers to combine their content with our archives and enhance the Open Citation Index, developing a richer, increasingly transparent and more precise metric of how research is being re-used.
Apart from the time and money wasted in rejecting perfectly good research, this apparent relationship has important implications for researchers. They will often submit to higher-impact (and therefore apparently more selective) journals in the hope that this confers some sort of prestige on their work, rather than letting the research speak for itself. Given the relatively high likelihood of rejection, submissions then continue down the ‘impact ladder’ until a more receptive venue is finally found.
Student evaluations in teaching form a core part of our education system. However, there is little evidence to demonstrate that they are effective, or even work as they’re supposed to. This is despite such rating systems being used, studied and debated for almost a century.
A new analysis published in ScienceOpen Research offers evidence against the reliability of student evaluations in teaching, particularly as a measure of teaching effectiveness and for tenure or promotion decisions. In addition, the new study identified a bias against female instructors.
The new study by Anne Boring, Kellie Ottoboni, and Philip Stark (ScienceOpen Board Member) has already been picked up by several major news outlets including Inside Higher Education and Pacific Standard. This gives it an altmetric score of 54 (at the time of writing), which is the highest for any ScienceOpen Research paper to date!
As a newcomer on the Open Access publishing scene, ScienceOpen relies on the support of a wide range of academics. With this interview we would like to profile Advisory Board member Peter Suber (http://bit.ly/petersuber) and share the valuable perspective he brings to our organization.