The Challenge: Academic publishing is currently in transition to a fully digital industry. It faces the pressures and challenges of establishing new business models, products and reputation structures. The cost of innovating is especially high for smaller participants.
The Solution: Discovery is key in the digital space. ScienceOpen offers unique technologies for academic publishers to create, host and promote their journals and books embedded within a freely-accessible discovery environment with next-generation metrics and curation tools for reputation management and dissemination. We work closely with some of the leading publishers in the field to develop individual solutions for their content.
ScienceOpen has a wide range of packages and customizable services, so we have put together a short overview here. Contact us to find out more about what would be a good fit for your program.
“Search is the new journal!” was one of the rallying cries at the recent Force11 meeting in Berlin. But what does this mean? Well, we have a bit of a problem in research – there is so much content being published these days, around 2–3 million papers each year across some 50,000 journals! It has never been more crucial to have efficient ways of searching to discover the work relevant to your research question. No single human is capable of this alone.
Now, we know Google Scholar is usually everyone’s search engine of choice for research articles. But when you pop in a search term, how do you know what research is good, what’s relevant to you, what people are talking about? You just get an enormous list that trails off with ever-decreasing relevance, and are supposed to be able to figure that all out yourself. We can do better.
Quality and quantity
Efficient search is the core problem that our freely accessible, multi-layer discovery engine helps to solve. The ScienceOpen database currently holds more than 36 million article records and grows by around 100,000 new records each week. Each of these records is linked to other articles through our open citation network.
Smart search – because it’s 2017!
We use this citation information, together with other article metadata, to provide an enriched search ecosystem. It lets users drill down to relevant research using a range of different contexts and criteria, saving time and energy, and facilitating research discovery across multiple dimensions.
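To make that a little more concrete, here is a minimal sketch – our own illustration, not ScienceOpen’s actual code, and every name in it is hypothetical – of an article record linked into a citation network, with faceted filtering and sorting over a set of such records:

```python
from dataclasses import dataclass, field

@dataclass
class ArticleRecord:
    """A toy stand-in for a discovery-engine article record."""
    doi: str
    title: str
    journal: str
    year: int
    citations: int = 0            # times cited within the network
    altmetric_score: float = 0.0  # social attention (see below)
    references: list = field(default_factory=list)  # DOIs this article cites

def search(records, journal=None, year_from=None, sort_by="citations"):
    """Filter records on simple facets, then rank by the chosen metric."""
    hits = [r for r in records
            if (journal is None or r.journal == journal)
            and (year_from is None or r.year >= year_from)]
    return sorted(hits, key=lambda r: getattr(r, sort_by), reverse=True)
```

The point of the sketch is the shape of the data: because every record carries both its metrics and its reference list, one corpus can support filtering, ranking and network traversal at the same time.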
Sort by citation count
Citations are still one of the main forms of ‘academic currency’ in the modern research world. They measure only how many times a piece of work has been cited, without any additional context. As such, they are a simple proxy for ‘scholarly discussion’ of a piece of work, but beyond this they carry little meaning as a metric on their own.
Sorting search results by citations lets you see what is most popular in a research context, and which articles have been particularly important in developing new disciplines, ideas, and ways of thinking. Identifying highly cited articles gives you a great starting point for further discovery. Citations reveal the lineage of ideas – start at the top, and work your way down! Understanding the historical context of ideas is critical for good research, and ScienceOpen helps you to explore this.
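Using the toy records from the sketch above, “start at the top and work your way down” could be expressed as hopping to the most-cited reference at each step – again just an illustration of the idea, not production code:

```python
def trace_lineage(record, index, depth=3):
    """Walk an idea's lineage by following the most-cited reference each time.
    `index` maps DOI -> ArticleRecord and stands in for the citation network."""
    chain = [record]
    for _ in range(depth):
        refs = [index[d] for d in chain[-1].references if d in index]
        if not refs:
            break  # reached a record with no resolvable references
        chain.append(max(refs, key=lambda r: r.citations))
    return chain
```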
Sort by Altmetric score
Altmetric scores are a combined measure of social attention for articles. They give us a nice idea of how much an article is being discussed in news outlets or on social media. If you want to keep up with the buzz in your field, or find out what’s of interest in another, ScienceOpen gives you the tools for that.
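Altmetric also exposes a free public API for looking up the attention score of an individual article by DOI, so you can experiment with these numbers outside any platform. A minimal sketch, with error handling kept deliberately simple:

```python
import requests

def altmetric_score(doi):
    """Return the Altmetric attention score for a DOI, or None if untracked."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    if resp.status_code == 404:  # Altmetric has no attention data for this DOI
        return None
    resp.raise_for_status()
    return resp.json().get("score")

print(altmetric_score("10.1038/news.2011.490"))  # Altmetric's own example DOI
```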
ScienceOpen and Altmetric are pleased to co-host a webinar on 4 July, 3:00–4:00 PM BST, titled “The future of altmetrics”. Register here.
Altmetrics are non-traditional metrics that can be used as alternative measures of scholarly impact. As an article-level metric, they contain information about how research is shared and re-used in a digital environment, such as mentions in tweets, blogs, or Wikipedia pages. They are becoming increasingly important for researchers as they offer a much richer understanding of how their research is being used by broader communities.
For this one-hour long webinar, we have a fantastic panel of expert speakers for you!
James Wilsdon of Sheffield University
Stephanie Dawson, CEO of ScienceOpen
Euan Adie, Founder of Altmetric
James Wilsdon is the Chair of the Expert Group leading the European Commission’s consultation on next-generation metrics, and Professor of Research Policy and Director of Impact and Engagement in the Faculty of Social Sciences at the University of Sheffield. Since 2013 he has been Chair of the UK’s Campaign for Social Science, and he recently chaired an independent review of the role of metrics in the management of the UK’s research system, which published its final report, The Metric Tide, in July 2015.
From 2001 to 2012 Stephanie Dawson worked in various positions at the academic publisher De Gruyter in Berlin, in both journals and book publishing in the fields of biology and chemistry. In 2013 she took on the role of Managing Director of ScienceOpen GmbH in Berlin. Before changing fields and completing a PhD in German Literature at the University of Washington, she worked at the Fred Hutchinson Cancer Research Center in Seattle and with Ralph Rupp at the Friedrich Miescher Laboratory in Tübingen, Germany.
Euan Adie founded Altmetric in 2011 out of the growing altmetrics movement. Altmetric is a Digital Science company based in London specialising in tracking and analysing the online activity around scholarly research outputs for researchers, institutes and publishers. Euan had previously worked on Postgenomic.com, an open source scientific blog aggregator founded in 2006.
Our experts will cover the technical, political, and practical implications of altmetrics and the development of next-generation metrics.
Registration with a valid email address is required to obtain the webinar details. The webinar is free of charge and open to everyone.
Search engines form the core of research discovery these days. There is simply too much information out there to search journal by journal, or manually.
We highlighted in a previous post the advantages of using ScienceOpen’s dual-layered search and filter functions over others like Google Scholar. Today, we’re happy to announce that we just made it even better!
Say you want to search all of PeerJ’s content. Pop ‘PeerJ’ into the journal search, and it’ll come up with all their content, as it’s all indexed in PubMed. Hey presto, there you have 1530 papers, all with full texts attached. Neat, eh? And that will update as more gets published with PeerJ, so you know what to do.
But that’s a lot of content. What you’ve just discovered is the PeerJ megajournal haystack. Now we want to filter it down to find the needles.
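Incidentally, because PeerJ’s content is indexed in PubMed, you can reproduce a count like the one above directly against NCBI’s public E-utilities API – the figure will of course have grown since this post. A minimal sketch:

```python
import requests

def pubmed_journal_count(journal):
    """Count PubMed records for a journal via NCBI's ESearch endpoint."""
    resp = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
        params={"db": "pubmed", "term": f'"{journal}"[Journal]', "retmode": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])

print(pubmed_journal_count("PeerJ"))  # 1530 at the time of writing
```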
Eugene Garfield, one of the founders of bibliometrics and scientometrics, once claimed that “Citation indexes resolve semantic problems associated with traditional subject indexes by using citation symbology rather than words to describe the content of a document.” This statement heralded a new dawn of Web-based citation measurement, implemented as a way to describe the academic re-use of research.
However, Garfield had only partially solved the problem of measuring re-use: one of the major shortcomings of citation counts is that they are essentially contextless – they don’t tell us anything about why research is being re-used. Nonetheless, citation counts are now at the very heart of academic systems, for two main reasons:
They are fundamental for grant, hiring and tenure decisions.
They form the core of how we currently assess academic impact and prestige.
Working out article-level citation counts is actually pretty complicated, though, and depends on where you source your information. If you read the last blog post here, you’ll have seen that search results from Google Scholar, Web of Science, PubMed, and Scopus all vary to quite some degree. Well, it’s the same for citations, and it comes down to what each service indexes. Scopus indexes 12,850 journals, the largest documented number at the moment; PubMed has 6,000 journals of mostly clinical content; and Web of Science offers broader coverage, with 8,700 journals. However, unless you pay for both Web of Science and Scopus, you cannot see who is re-using work or how much; and even with access, the two services give inconsistent results. Not too useful when these numbers matter for impact assessment criteria and your career.
Hagen Cartoons, “Struggling scientists” (source: CartoonStock).
Google Scholar, however, offers a free citation indexing service based, in theory, on all published journals, possibly along with a whole load of ‘grey literature’. For the majority of researchers, Google Scholar is now the go-to powerhouse search tool. Accompanying this power, though, is a web of secrecy: it is unknown exactly what Google Scholar crawls, but you can bet it reaches pretty far, given the amount of self-archived, and often illegally archived, content its searches return. So the basis of its citation index is a bit of a mystery, lacking any form of quality control, and confounded by the fact that it can include citations from non-peer-reviewed works – which will be an issue for some.
Academic citations represent the structured genealogy, or network, of an idea, and the association between themes or topics. I like to think that citation counts tell us how imperfect our knowledge is in a certain area, and how much researchers are working to change that. Researchers quite like citations: we like to know how many we’ve got, and who is citing and re-using our work. These are two quite different things: re-use can be reflected by a simple number, which is fine in a closed system; but to get a deeper context for how research is being re-used, and to trace the genealogy of knowledge, you need openness.
At ScienceOpen, we have our own way to measure citations. We’ve recently implemented it, and are only just beginning to realise the importance of this metric. We’re calling it the Open Citation Index, and it represents a new way to measure the retrieval of scientific information.
But what is the Open Citation Index, and how is it calculated? The core of ScienceOpen is a huge corpus of open access articles drawn primarily from PubMed Central and arXiv – about 2 million open access records, each with its own reference list. Using a clever metadata extraction engine, we take each of these cited references and create an article stub for it. These stubs, or metadata records, form the core of our citation network. The number of citations derived from this network is displayed on each article, and each item that cites another can be openly accessed from within our archive.
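Stripped of all the real-world messiness of metadata extraction, the stub-building step boils down to counting inbound references across the corpus. The following is an illustrative sketch of that idea, not our actual engine:

```python
from collections import defaultdict

def build_citation_network(full_records):
    """full_records: iterable of (doi, reference_dois) pairs from the OA corpus.
    Returns DOI -> citation count, creating a stub entry for every cited DOI
    that has no full-text record of its own yet."""
    citations = defaultdict(int)
    for doi, refs in full_records:
        citations[doi] += 0      # make sure the citing article is in the network
        for ref in refs:
            citations[ref] += 1  # each reference becomes, or updates, a stub
    return dict(citations)
```

A stub only ever comes into existence because something cited it, which is why every stub in the network carries at least one citation.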
Visualising citation networks: pretty, but complex.
So the citation counts are based exclusively on open access publications, and therefore provide a pan-publisher, article-level measure of how ‘open’ your idea is. The way these data are gathered also means that every article record has at least one citation, so we explicitly provide a level of cross-publisher content filtering. It is important that we find ways to measure the effect of open access, and the Open Citation Index provides one way to do this. For researchers, the Open Citation Index is about gaining prestige in a system that is gradually, but inevitably and inexorably, moving towards ‘open’ as the default way of conducting research.
In the future, we will work with publishers to combine their content with our archives and enhance the Open Citation Index, developing a richer, increasingly transparent and more precise metric of how research is being re-used.
The amount of published scientific research is simply enormous. Current estimates put it at over 70 million individual research articles, with around 2 million more published every year. We are in the midst of an information revolution, with the World Wide Web offering rapid, structured and practical distribution of knowledge. But for researchers, this creates the monumental task of manually finding relevant content to fuel their work, and raises the question: are we doing the best we can to leverage this knowledge?
There are already several well-established searchable archives – scientific databases that serve as warehouses for our knowledge and data. The best known include Web of Science, Scopus, PubMed, and Google Scholar, which together are the de facto tools of current information retrieval. The first two are paid services, and attempts to replicate searches across the platforms produce inconsistent results (e.g., Bakkalbasi et al., Kulkarni et al.), raising questions about how each acquires its content. The search algorithms of each are also fairly opaque, and their relative reliability is quite uncertain. Each has its own benefits and pitfalls, though, which are far better discussed elsewhere (e.g., Falagas et al.).
So where does this leave discoverability for researchers in a world that is becoming more and more ‘open’?
Apart from the time and money arguably wasted in rejecting perfectly good research, this apparent relationship between impact and selectivity has important implications for researchers. They will often submit to higher-impact (and therefore apparently more selective) journals in the hope that this confers some sort of prestige on their work, rather than letting the research speak for itself. Given the relatively high likelihood of rejection, submissions then continue down the ‘impact ladder’ until a more receptive venue is finally found.