As part of our ongoing development of ScienceOpen 2.017, we have designed an exciting and, most importantly, pretty new context-enhanced page for each of our 27 million article records. Such enriched article metadata is becoming increasingly important for defining the context of research, as scholarly communication moves away from journal-level towards article-level evaluation.
Statistically significant upgrades
All of the statistics have been moved to the top of the page, including the number of page views or readers, the Altmetric score, the number of recommendations, and the number of social media shares.
Newly featured statistics include the top references cited within each article, the top articles citing that paper, and the number of similar articles based on keywords and topics. These new features are great for authors as content creators, for researchers as users, and for publishers seeking to understand the popularity and context of the research they publish.
We publish from across the whole spectrum of research: Science, Technology, Engineering, Humanities, Mathematics, Social Sciences. Every piece of research deserves an equal chance to be published, irrespective of its field.
We also don’t discriminate based on the type of research: original research, small-scale studies, opinion pieces, “negative” or null findings, review articles, data and software articles, case reports, and replication studies. We publish it all.
At ScienceOpen, we believe that the Journal Impact Factor (JIF) is a particularly poor way of measuring the impact of scholarly publishing. Furthermore, we think that it is a highly misleading metric for research assessment despite its widespread [mis-]use for this, and we strongly encourage researchers to adhere to the principles of DORA and the Leiden Manifesto.
This is why we do not obtain or report the JIF for our primary publication, ScienceOpen Research. Instead, we provide article-level metrics and a range of other features that enrich the context of each article, and we extend this to all 25 million research articles on our platform.
A simple proposal for the publication of journal citation distributions (link)
How can academia kick its addiction to the impact factor? (link)
Free-to-publish Open Access journals offer an incredible service to the research community and the broader public, with editors often working long hours with no compensation. We want to recognise this effort and reward it with free indexing on our platform!
More visibility for your journal
Journals indexed on ScienceOpen:
Reach new audiences and maximize your readership
Drive more usage to your journals
Upload your content to a unique search/discovery and communication platform
Open up the context of your content
What do we need from you?
An application form can be found here. Fill it out, and submit to our team. Simple!
Out of the applicants, we will select up to 10 journals per month for free indexing, and the best application will also receive a free featured journal collection! On the last day of every month, we will announce the winners via social media and begin the next cycle. All other applications will roll over into the next month.
At ScienceOpen, we’re constantly upgrading our platform to provide the best possible user interaction experience. We get feedback from the research community all the time, and try to adapt to best meet their needs.
So today, we’re happy to announce two neat little features in our latest updates.
Firstly, all Open Access articles now have a cute little symbol next to them, making it even easier for you to discover open content. This shows up on all of our Open Access content across nearly 14 million article records now. Making open content stand out is a great way to encourage others to adopt open practices, as well as help people see which content they can re-use most easily.
As well as this, we have a new browsing function built into our collections. Sometimes, collections are pretty big: some of our new SciELO collections contain tens of thousands of Open Access articles, and sifting through those manually is not exactly a valuable use of one’s time.
With this new function, you can now filter content within collections by journal, publisher, keywords, and even filter them by citations or Altmetric scores. Discovering content relevant to your research should be smart and efficient, and this is what our platform delivers. Try it out on this collection, or build your own!
Context is something we’ve been thinking a lot about at ScienceOpen recently. It comes from the Latin ‘con’ and ‘texere’ (to form ‘contextus’), which means ‘weave together’. The implications for science are fairly obvious: modern research is about weaving together different strands of information, thought, and data to place your results into the context of existing research. This is the reason why we have introductory and discussion sections at the intra-article level.
But what about context at a higher level?
Context can be defined as: “The circumstances that form the setting for an event, statement, or idea, and in terms of which it can be fully understood.” Simple follow-on questions, then, might be: what is the context of a research article? How do we define that context? And how do we build on it to do science more efficiently? The whole point of research articles is that they can be understood by as broad an audience as possible, so that their re-use is maximised.
There are many things that impinge upon the context of research: paywalls, secretive and exclusive peer review, poor discoverability, lack of interoperability, lack of accessibility. The list is practically endless, and largely a by-product of traditional scholarly publishing models failing to embrace a Web-based era.
Well, we’ve had some absolute stars recently in our ‘open science’ series! If you haven’t seen them yet, head over and check them out – such a diverse array of experiences and perspectives! Today we spoke with Josh King, the founder of Brevy. It’s an awesome new platform, and we’ll let Josh tell you more about it here, enjoy!
Hi Josh, thanks for joining us! Could you tell us a bit about why you started Brevy?
Brevy is an independent, volunteer group of a few stubborn individuals who work on the project during our off hours (read “nights and weekends”). While my own day job is in science outreach, I work with a couple of other partners (a fantastic computer science start-up owner and a behavioural psychologist make up our merry band) to help direct and maintain the site. We’re nothing special on our own, so the real stars here are those who pitch in by adding summaries to Brevy or introducing it as a class assignment to help grow the body of content!
When did you first hear about Open Access and Open Science? What did you first think?
That would likely be during my undergraduate years studying biochemistry, when I became hopelessly frustrated trying to write reports using papers I often had no access to (even with our university library!). At the time, I thought of the concepts as fanciful dreams, but thankfully here we are, with Open Access a growing paradigm and various open science platforms blossoming around the web.
What do you think the biggest problem with the current scholarly publishing system is?
Meaningful publishing. By reasonable estimates, more than 1,000,000 academic papers are published each year. These works appear on platforms known largely only to academics, and then often only to a specific subset of academia. Publications on these platforms are not always accessible even to this select group and generally do not support further dialogue or dissemination well, with a surprisingly large number going uncited. Taken pessimistically, this is tantamount to ejecting hundreds of thousands of new pieces of knowledge into the void each year.
We can be optimistic about this however! Taken optimistically, there are hundreds of thousands of possibly exciting and ground-breaking new ideas all of the time that most of us don’t know about! But to see it this way, to truly believe it, we have to start caring about the meaningfulness of research. We have to start thinking about different types of impacts than citation count and means of prestige other than the journal name. And we have to care what our work means to the world outside academia.
The impact factor is academia’s worst nightmare. So much has been written about its flaws, both in calculation and application, that there is little point in reiterating the same tired points here (see here by Stephen Curry for a good starting point).
Recently, I was engaged in a conversation on Twitter (story of my life…) with the nice folks over at the Scholarly Kitchen and a few researchers. There was a lot of finger pointing, with the blame for impact factor abuse being aimed at researchers, publishers, funders, Thomson Reuters, and basically every player in the whole scholarly communication environment.
As with most Twitter conversations, very little was achieved in the moderately heated back and forth about all this. What became clear though, or at least clearer, is that despite everything that has been written about the detrimental effects of the impact factor in academia, it is still widely used: by publishers for advertising, by funders for assessment, by researchers for choosing where to submit their work. The list is endless. As such, there are no innocents in the impact factor game: all are culpable, and all need to take responsibility for its frustrating immortality.
The problem is cyclical if you think about it: publishers use the impact factor to appeal to researchers, researchers use the impact factor to justify their publishing decisions, and funders sit at the top of the triangle facilitating the whole thing. One ‘chef’ of the Kitchen piped in to say that publishers recognise the problems but still have to use the metric because it’s what researchers want. This sort of passive facilitation of a broken system helps no one: it acknowledges the problem while sidestepping any responsibility for the fundamental misuse of a problematic metric. The same goes for academics.
(Note: these are just smaller snippets from a larger conversation)
What some of us did seem to agree on in the end, or at least one point that remains important, is that everyone in the scholarly communication ecosystem needs to take responsibility for, and action against, misuse of the impact factor. Pointing fingers and dealing out blame solves nothing; it merely deflects accountability without changing anything and, worse, perpetuates what is known to be a broken system.
So here are eight ways to kick that nasty habit! The impact factor is often referred to as an addiction for researchers, or a drug, so let’s play with that metaphor.
We’re continuing our series on highlighting diverse perspectives in the vast field of ‘open science’. The last post in this series with Iara Vidal highlighted the opportunities of using altmetrics, as well as insight into scholarly publishing in Brazil. This week, Ernesto Priego talks with us about problems with the scholarly publishing system that led him to start his own journal, The Comics Grid.
There was no real reason to not start your own journal as an academic, to regain control of our own work and to create, disseminate and engage with scholarship in a faster, more transparent, fairer way.
Hi Ernesto! Thanks for joining us here. Could you start off by letting us know a little bit about your background?
I was born in Mexico City. I am Mexican and I have British nationality too. I studied English Literature at the National Autonomous University of Mexico (UNAM) where I also taught and was part of various research projects. I came to the UK to do a master’s in critical theory at UEA Norwich and a PhD in Information Studies at University College London. I currently teach Library and Information Science at City University London.
When did you first hear about open access and open science? What were your initial thoughts?
I cannot recall exactly. I think I first encountered the concept of ‘open access’ via Creative Commons. I was a keen blogger between 1999 and 2006, and I remember that around 2002 I first came across the concept of the ‘commons’. I think it was through Lawrence Lessig that I really got interested in how restrictive scholarly communications were in comparison to the ideas being discussed by the Free Culture movement. Lessig’s Free Culture (2004) changed things for me. (For more background, I recently talked to Mike Taylor about why open access means so much to me in this interview).
We need to think about the greater good, not just about ourselves as individuals.
You run your own journal, The Comics Grid – what was the motivation behind this?
Realising how difficult and expensive it was to access paywalled research got me quite frustrated with scholarly publishing. When I was doing my PhD I just could not understand why academics were stuck with a largely cumbersome and counter-intuitive system. The level of friction was killing my soul (it still does). It just seemed to me (now I understand the larger issues better) that there was no real reason not to start your own journal as an academic, to regain control of our own work and to create, disseminate and engage with scholarship in a faster, more transparent, fairer way. I’ve said before that scholarly publishing often feels like the place where academic content goes to die: the end of the road. I feel publishing should be a point of departure, not the end.
We’re running a series to showcase some of the different perspectives in the scholarly publishing and communication world, and in particular regarding the theme of ‘Open Science’. We’ve already heard from Joanne Kamens about her work in making open data repositories and campaigning for greater diversity in STEM; Dan Shanahan discussed issues with the impact factor and assessment in academia; Gal Schkolnik let us know about her research into Shewanella and experiences with Open Access publishing; and Israel Bimpe described his story as a student from Rwanda and global health champion. So quite a mix, and it’s been great to get such a variety of thoughts, perspectives, and experiences.
But we’re not stopping there! We spoke to Iara Vidal who is working on her PhD in Information Science at the Federal University of Rio de Janeiro in Brazil, and has plenty of experience with altmetrics and also in working as a librarian. Here’s her story!
Hi Iara! So can you tell us a bit about your background to get things rolling?
Sure! I had my first experience with scientific research in high school. I was in what we call a “technical school” here in Brazil, studying to be a meteorological technician. In 1998, some other students and I did a study correlating rain levels with the incidence of certain diseases whose transmission is somehow related to water. It was great fun to go looking for all the data we needed, and we actually got a poster accepted at the 10th Brazilian Meteorology Conference (pdf is available here, if you’re curious and can read Portuguese – there’s a short English abstract but that’s it). That was my first scientific event – and honestly, conferences are probably my favourite aspect of academia to this day. For college, I changed from Meteorology to Library Science. I joined a research group in my university and kept presenting papers at small scientific events and student meetings. It was an amazing experience, but when I graduated in early 2005 I decided to go work in libraries instead of staying in academia. I *love* being a librarian, but things became difficult when, for reasons that are too complicated to explain here, I ended up as the sole librarian in a federal agency. Much as I tried, I could not improve my situation. So, in 2012, I decided to leave and pursue an academic career. I got my master’s degree in Information Science in 2014, and have been working on my PhD since 2015.
When did you first hear about open access and open science? What was your initial reaction?
I think I first heard about open access in the early 2000s, maybe at one of the Library and Information Science student meetings I used to go to. But it was only in the past few years that I got more involved in the issue. In 2013 I attended a conference celebrating the 15th anniversary of the SciELO Network (http://www.scielo15.org/en/about/), which got me really excited not only about open access, but also about the role of Latin America and other peripheral regions in all this. As I researched more about open access I got to know about open science as well. My reaction to all this was one of excitement (hell yeah, let’s free knowledge!), but also questioning: how do we get people to change their behaviour? I think the answer lies in incentives, which increased my interest in research evaluation. I studied altmetrics in my master’s and am now moving to article-level metrics, but the end goal is improving evaluation.
How do we get people to change their behaviour? I think the answer lies in incentives, which increased my interest in research evaluation.
Last week, we kicked off our series of interviews with some of the top ‘open scientists’ by speaking with Dr. Joanne Kamens of Addgene, and had a look at some of the great work she’s been doing in promoting a culture of data sharing and equal opportunity for researchers. Today, we’ve got something completely different, with Daniel Shanahan of BioMed Central, who recently published a really cool PeerJ paper on auto-correlation and the impact factor.
Hi Daniel! To start things off, can you tell us a bit about your background?
I completed a Master’s degree in Experimental and Theoretical Physics at the University of Cambridge, but must admit I did my Master’s more to have an extra year to play rugby for the university than out of a love of micro-colloidal particles and electron lasers. I have always loved science though, and found my way into STM publishing, albeit by a slightly less than traditional route.