
Author: Jon Tennant

In:  Other  

Collections as the future of academic-led journals

ScienceOpen Collections are thematic groups of research articles that transcend journals and publishers to transform how we collate and build upon scientific knowledge.

What are Collections?

The modern research environment is a hyper-dimensional space with a vast quantity of outputs that are impossible to manually manage. You can think of research like a giant Rubik’s cube: you have different ‘colours’ of research that you have to mix and match and play around with to discover how the different sections fit together to become something useful.

CC BY-SA 3.0, Booyabazooka (Wikipedia)

We view Collections as the individual faces of a Rubik’s cube. They draw from the vast, and often messy, pool of published research to provide an additional layer of context and clarity. They represent a new way for researchers to filter the published record to discover and curate content that is directly relevant to them, irrespective of who published it or what journal it appears in.

Advantages of Collections

Perhaps the main advantage of Collections for researchers is that they are independent of journals and publishers and their branding criteria. Researchers are undoubtedly the best placed to assess what research is relevant to themselves and their communities. As such, we see Collections as a natural continuation of the modern journal’s transformation, one that comes almost full circle by returning journals to their basic principles.

The advantage of using Collections is that they give researchers the power to filter and select from the published record and to create what is, in essence, a highly specialised virtual journal. This means that Collections are not pre-selective; instead, they comprise papers filtered by a single criterion: research that is relevant to your peers, and also deemed relevant by them.

Filtering for Collections occurs at different levels, depending on the scope or complexity of the research. For example, Collections can be designed to focus on different research topics, lab or research groups, communities, or even departments or institutions. Collections can also be created for specific conferences and include posters from them, published on ScienceOpen. You define the scope and the selection criteria, as the sketch below illustrates.
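
To make this concrete, the sketch below models a Collection as a publisher-independent filter over article metadata. It is purely illustrative, not ScienceOpen’s actual implementation, and every name in it is hypothetical.

```python
# Purely illustrative: a Collection as a filter over published records,
# independent of journal or publisher. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Article:
    title: str
    journal: str
    publisher: str
    topics: set = field(default_factory=set)

@dataclass
class Collection:
    name: str
    criteria: set  # the topics the curator deems relevant

    def select(self, articles):
        # The only discriminator is topical relevance, never the venue.
        return [a for a in articles if a.topics & self.criteria]

published = [
    Article("Zika vector ecology", "Journal A", "Publisher A", {"zika", "epidemiology"}),
    Article("Group theory of the Rubik's cube", "Journal B", "Publisher B", {"algebra"}),
]

zika = Collection("Zika outbreak", {"zika"})
print([a.title for a in zika.select(published)])  # ['Zika vector ecology']
```

Note that the journal and publisher fields play no part in selection; only the curator’s criteria do, which is the sense in which a Collection behaves as a virtual journal.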

Continue reading “Collections as the future of academic-led journals”  

The Open Citation Index

Eugene Garfield, one of the founders of bibliometrics and scientometrics, once claimed that “Citation indexes resolve semantic problems associated with traditional subject indexes by using citation symbology rather than words to describe the content of a document.” This statement heralded a new dawn of Web-based citation measurement, implemented as a way to describe the academic re-use of research.

However, Garfield had reached only a partial solution to the problem of measuring re-use, as one of the major shortcomings of citation counts is that they are largely context-free: they tell us nothing about why research is being re-used. Nonetheless, citation counts now sit at the very heart of academic systems for two main reasons:

  • They are fundamental for grant, hiring and tenure decisions.
  • They form the core of how we currently assess academic impact and prestige.

Working out article-level citation counts is actually pretty complicated, though, and depends on where you source your information. If you read the last blog post here, you’ll have seen that search results from Google Scholar, Web of Science, PubMed, and Scopus all vary to quite some degree. Well, the same is true for citations, and it comes down to what each service indexes. Scopus indexes 12,850 journals, the largest documented number at the moment. PubMed, on the other hand, covers 6,000 journals of mostly clinical content, while Web of Science offers broader coverage with 8,700 journals. However, unless you pay for both Web of Science and Scopus, you won’t get to know who is re-using work, or how much; and even if you are granted access, the two services return inconsistent results. Not too useful when these numbers matter for impact assessment criteria and your career.

Hagen Cartoons: Struggling scientists. Source: CartoonStock.

Google Scholar, however, offers a free citation indexing service based, in theory, on all published journals, and possibly a whole load of ‘grey literature’ besides. For the majority of researchers now, Google Scholar is the go-to powerhouse search tool. Accompanying this power, though, is a whole web of secrecy: it is unknown what Google Scholar actually crawls, but you can bet it reaches pretty far, given the amount of self-archived, and often illegally archived, content it returns from searches. So the basis of its citation index is a bit of a mystery, lacking any form of quality control, and confounded by the fact that it can include citations from non-peer-reviewed works, which will be an issue for some.

Academic citations represent the structured genealogy of an idea, and the network of associations between themes or topics. I like to think that citation counts tell us how imperfect our knowledge is in a certain area, and how hard researchers are working to change that. Researchers quite like citations: we like to know how many we have, and who is citing and re-using our work. These are two quite different things. Re-use can be reflected by a simple number, which is fine in a closed system; but to get a deeper context for how research is being re-used, and to trace the genealogy of knowledge, you need openness.

At ScienceOpen, we have our own way to measure citations. We’ve recently implemented it, and are only just beginning to realise the importance of this metric. We’re calling it the Open Citation Index, and it represents a new way to measure the retrieval of scientific information.

But what is the Open Citation Index, and how is it calculated? The core of ScienceOpen is a huge corpus of open access articles drawn primarily from PubMed Central and arXiv. This amounts to about 2 million open access records, each with its own reference list. Using a clever metadata extraction engine, we take each of these cited references and create an article stub for it. These stubs, or metadata records, form the core of our citation network. The number of citations derived from this network is displayed on each article, and each item that cites another can be openly accessed from within our archive.
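
In outline, the process resembles the sketch below: assuming a simplified model in which each record is identified by a DOI and carries a list of cited DOIs, stub records are created for every cited work, and citation counts fall out of the resulting network. This is an illustration of the general technique, with hypothetical data, not our production engine.

```python
# Illustrative sketch of building a citation network from reference lists.
# Hypothetical, simplified data model; the real metadata extraction engine
# parses full article records from sources such as PubMed Central and arXiv.
from collections import defaultdict

# Each full-text record carries its own reference list (here, lists of DOIs).
records = {
    "10.1000/a": ["10.1000/b", "10.1000/c"],
    "10.1000/b": ["10.1000/c"],
}

stubs = {}                    # metadata stubs created for cited works
cited_by = defaultdict(list)  # reverse edges of the citation network

for doi, references in records.items():
    for ref in references:
        # Create an article stub for any cited work we have not yet seen.
        stubs.setdefault(ref, {"doi": ref, "stub": True})
        cited_by[ref].append(doi)

for doi, citers in cited_by.items():
    print(f"{doi}: {len(citers)} citation(s), cited by {citers}")
```

A stub only ever exists because something in the corpus cites it, so every stub enters the network with at least one citation; this property is what lets the index double as a content filter, as described below.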

Visualising citation networks: pretty, but complex. (Source)

So the citation counts are based exclusively on open access publications, and therefore provide a pan-publisher, article-level measure of how ‘open’ your idea is. The way these data are gathered also means that every article record has at least one citation, so we explicitly provide a level of cross-publisher content filtering. It is important that we find ways to measure the effect of open access, and the Open Citation Index provides one way of doing so. For researchers, the Open Citation Index is about gaining prestige in a system that is gradually, but inevitably and inexorably, moving towards ‘open’ as the default way of conducting research.

In the future, we will work with publishers to combine their content with our archives and enhance the Open Citation Index, developing a richer, increasingly transparent and more precise metric of how research is being re-used.

Moving beyond a journal-based filtering system

The amount of published scientific research is simply enormous. Current estimates put it at over 70 million individual research articles, with around 2 million more published every year. We are in the midst of an information revolution, with the World Wide Web offering rapid, structured and practical distribution of knowledge. But for researchers, this creates the monumental task of manually finding relevant content to fuel their work, and raises the question: are we doing the best we can to leverage this knowledge?

There are already several well-established searchable archives: scientific databases that act as warehouses for our knowledge and data. The best known include Web of Science, Scopus, PubMed, and Google Scholar, which together are the de facto tools for information retrieval. The first two are paid services, and attempts to replicate searches across the platforms produce inconsistent results (e.g., Bakkalbasi et al., Kulkarni et al.), raising questions about each one’s coverage. The search algorithms behind each are also fairly opaque, and their relative reliability is quite uncertain. Each has its own benefits and pitfalls, though, which are far better discussed elsewhere (e.g., Falagas et al.).

So where does this leave discoverability for researchers in a world that is becoming more and more ‘open’?

Continue reading “Moving beyond a journal-based filtering system”  

In:  Peer Review  

Pre- or post-publication peer review

Traditional models of peer review occur pre-publication by selected referees and are mediated by an Editor or Editorial Board. This model has been adopted by the vast majority of journals, and acts as the filter system to decide what is considered to be worthy of publication. In this traditional pre-publication model, the majority of reviews are discarded as soon as research articles become published, and all of the insight, context, and evaluation they contain are lost from the scientific record.

Several publishers and journals are now exploring peer review that occurs after publication. The principle here is that all research deserves the opportunity to be published, with filtering through peer review taking place after the research article has been communicated. Numerous venues now provide built-in systems for post-publication peer review, including ScienceOpen, RIO, The Winnower, and F1000 Research. Beyond those adopted by journals, there are also post-publication annotation and commenting services, such as hypothes.is and PubPeer, that are independent of any specific journal or publisher and operate across platforms.

A potential future system of peer review (source)

Continue reading “Pre- or post-publication peer review”  

In:  Peer Review  

Should peer review reports be published?

One main aspect of open peer review is that referee reports are made publicly available after the peer review process. The theory underlying this is that peer review becomes a supportive and collaborative process, viewed more as an ongoing dialogue between groups of scientists progressively assessing the quality of research. Furthermore, it opens the reviews themselves up to analysis and inspection, which adds an additional layer of quality control to the review process.

This co-operative and interactive mode of peer review, in which review is treated as a conversation rather than a selection system, has been shown to be highly beneficial to researchers and authors. A 2011 study found that implementing an open review system increased co-operation between referees and authors, improved the accuracy of reviews, and decreased errors throughout the review process. Ultimately, it is this process which decides whether research is suitable or ready for publication. A recent study has even shown that the transparency of the peer review process can be used to predict the quality of published research. As far as we are aware, there are almost no drawbacks, documented or otherwise, to making referee reports openly available. What we gain by publishing reviews is the time, effort, knowledge exchange, and context of an enormous amount of currently secretive and largely wasted dialogue, which could also save around 15 million hours per year of otherwise lost work by researchers.

 


Continue reading “Should peer review reports be published?”

In:  Peer Review  

Peer review: open sesame?

Open peer review has many different aspects, and is not simply about removing anonymity from the process. Open peer review forms part of the ongoing evolution of an open research system, and the transformation of peer review into a more constructive and collaborative process. The ultimate goal of traditional peer review remains the same – to make sure that the work of authors gets published to an acceptable standard of scientific rigour.

Peer review involves different levels of bi-directional anonymity: the referees may know who the authors are but not vice versa (single-blind review), or both parties may remain anonymous to each other (double-blind review). Open peer review is a relatively new phenomenon (initiated in 1999 by the BMJ), one aspect of which is that the authors’ and referees’ names are disclosed to each other. The foundation of open peer review is transparency, to avoid the competition or conflicts that can arise because those performing peer review will often be the authors’ closest competitors, since they tend to be the most competent to assess the research.

Continue reading “Peer review: open sesame?”  

In:  Other  

Research on Zika virus free to publish via ScienceOpen

Two days ago, the World Health Organisation declared that the threat of the Zika virus disease in Latin America and the Caribbean constituted a Public Health Emergency of International Concern.

The decision was based on the outbreak of clusters of microcephaly and Guillain-Barré syndrome, devastating cases of congenital malformation and neurological complication. While a direct causal relationship has yet to be formally established, Zika infection during pregnancy is strongly correlated with microcephaly.

At ScienceOpen, we believe that rapid publication serves the communication of research, and aim to have submitted papers published online within 24-48 hours. For articles relating to the Zika outbreak, we are waiving the usual submission charge, and any published articles will be integrated into our pre-existing research collection on the Zika virus. Articles will receive top priority, and therefore be almost immediately available to the research community, medical professionals, and the wider public. We encourage submission of all articles relating to the virus. Please directly contact Stephanie Dawson for submissions and related enquiries.

Aedes aegypti, one of the culprit mosquitoes. Image: James Gathany, CC BY

There is clearly a need to co-ordinate international efforts, including those of the research community, to investigate and better understand the Zika virus. At ScienceOpen, we want to play our part in facilitating the communication of any such research, and the speedy protection of those at risk. We are happy to join other open access publishers, such as F1000 Research and PLOS Current Outbreaks (both of which publish very rapidly), in declaring that all research on the Zika virus can be published with us free of charge.

In:  Peer Review  

Credit given where credit is due

For the majority of scientists, peer review is seen as integral to, and a fundamental part of, their job as a researcher. To be invited to review a research article is perceived as a great honour, as recognition of expertise, and it forms part of a scientist’s duty to help progress research. However, the system is in a bit of a fix. With more and more being published every year, and ever-increasing demands on researchers’ time and funds, the capacity to perform peer review competently is dwindling, simply because it competes with other duties. Why, many researchers might ask, should they spend their valuable time reviewing others’ work for little to no recognition or reward, as is the case in the traditional model? Indeed, many publishers opine that the greatest value they add is in managing the peer review process, which in many cases is performed on a volunteer basis by academic Editors and referees, and is estimated to cost around $1.9 billion in management per year. But who actually gets the recognition and credit for all of this work?

Continue reading “Credit given where credit is due”  

In:  Peer Review  

Advances in peer review

It’s not too hard to see that the practices of, and attitudes towards, ‘open science’ are evolving amidst an ongoing examination of what the modern scholarly system should look like. While we might be more familiar with the ongoing debate about how best to implement open access to research articles and the data behind publications, discussions regarding the structure, management, and process of peer review are perhaps more nuanced, but arguably of equal or greater significance.

Peer review is of enormous importance for managing the content of the published scientific record and the careers of the scientists who produce it. It is perceived as the gold standard of scholarly publishing, and for many it determines whether or not research can be viewed as scientifically valid. Accordingly, peer review is a vital component at the core of the research communication process, with repercussions for the very structure of academia, which largely operates through a publication-based system of reward and incentive.

Continue reading “Advances in peer review”  

In:  Altmetrics  

The relationship between journal rejections and their impact factors

Frontiers recently published a fascinating article on the relationship between journals’ impact factors (IF) and their rejection rates. It was a neat little study, designed around the perception among many publishers that, in order to generate high citation counts for their journals, they must be highly selective and publish only the ‘highest quality’ work.

Apart from the waste of time and money involved in rejecting perfectly good research, this apparent relationship has important implications for researchers. They will often submit to higher-impact (and therefore apparently more selective) journals in the hope that this confers some sort of prestige on their work, rather than letting their research speak for itself. Given the relatively high likelihood of rejection, submissions then continue down the ‘impact ladder’ until a more receptive venue is finally found.

Continue reading “The relationship between journal rejections and their impact factors”