Last week, we unveiled MyScienceOpen, our new professional networking platform for researchers. MyScienceOpen comes with a whole suite of new features that combines the functionality of a range of existing social platforms and brings them directly to you, all in one place. You get access to all of this at the click of a button through our ORCID integration. Free, legal, and easy!
But how can you track this increased impact? Aha, well, we thought of that. Every user at ScienceOpen now has a whole suite of brand new metrics that track how their content is being re-used across ScienceOpen.
Quantifying your context
Everyone can see the tab for your content in context. This shows how many total ‘nodes’, or connections, your work forms within the ScienceOpen platform. We show:
Your total number of publications
Your total number of different journals published in
We recognise that sometimes it’s not clear exactly what you’re supposed to do when joining a new research platform. What are the important features? What’s everybody else doing? How do I make my profile as strong as possible? Well, hopefully this will make it easier for you. If you’re still wondering ‘What’s that ScienceOpen thing all about?’, hopefully this will add a bit of clarity too!
Here are the main things you need to know about ScienceOpen:
Get an ORCID account
More than 3 million researchers already have an ORCID account, which acts as both a unique identifier and an integrated profile. Registration takes 30 seconds, and ORCID is now a core part of scholarly infrastructure, with many journals requiring an ORCID profile prior to article submission. Make sure it’s well populated with all of your published papers (drawn automatically from Web of Science, Scopus, or CrossRef). Easy!
Authors are undoubtedly the best positioned to promote their own research. They know it inside out, they know people who might be most interested in it, and they know the places to maximise the potential audience. But still, with an increasing number of publications every year, it is important that researchers know how to promote their research to maximum effect, whether it is Open Access or not.
Here are our top ten suggestions to help increase your reach and impact! Most of these fall under two categories: networking and maintaining your digital identity, and sharing your research to enhance its impact. Both are important in a modern scholarly environment, and can help to give you that competitive edge while making sure you’re maximising the potential of your research.
As part of our ongoing development of ScienceOpen 2.017, we have designed an exciting and, most importantly, pretty new context-enhanced webpage for each of our 27 million article records. Such enriched article metadata is becoming increasingly important in defining the context of research as scholarly communication evolves and we move away from journal-level towards article-level evaluation.
Statistically significant upgrades
All of the statistics have been moved to the top of the page, including the number of page views or readers, the Altmetric score, the number of recommendations, and the number of social media shares.
Newly featured statistics include the top references cited within, the top articles citing that paper, and the number of similar articles based on keywords and topics. These new features are great for authors as content creators, researchers as users, as well as publishers for understanding the popularity and context of research they publish.
We publish from across the whole spectrum of research: Science, Technology, Engineering, Humanities, Mathematics, Social Sciences. Every piece of research deserves an equal chance to be published, irrespective of its field.
We also don’t discriminate based on the type of research: original research, small-scale studies, opinion pieces, “negative” or null findings, review articles, data and software articles, case reports, and replication studies. We publish it all.
At ScienceOpen, we believe that the Journal Impact Factor (JIF) is a particularly poor way of measuring the impact of scholarly publishing. Furthermore, we think that it is a highly misleading metric for research assessment despite its widespread [mis-]use for this, and we strongly encourage researchers to adhere to the principles of DORA and the Leiden Manifesto.
This is why for our primary publication, ScienceOpen Research, we do not obtain or report the JIF. We provide article-level metrics and a range of other article aspects that provide and enhance the context of each article, and extend this to all 25 million research articles on our platform.
A simple proposal for the publication of journal citation distributions (link)
How can academia kick its addiction to the impact factor? (link)
Free-to-publish Open Access journals offer an incredible service to the research community and broader public, with editors often working long hours with no compensation. We want to recognise this effort and reward it with free indexing on our platform!
More visibility for your journal
Journals indexed on ScienceOpen:
Reach new audiences and maximize your readership
Drive more usage to your journals
Upload your content to a unique search/discovery and communication platform
Open up the context of your content
What do we need from you?
An application form can be found here. Fill it out, and submit to our team. Simple!
On the last day of every month, we will select and announce the winners via social media, and begin the next cycle! Out of the applicants, we will select up to 10 journals per month for free indexing, and the best application will get a free featured journal collection too! All others will roll over into the next month.
At ScienceOpen, we’re constantly upgrading our platform to provide the best possible user interaction experience. We get feedback from the research community all the time, and try to adapt to best meet their needs.
So today, we’re happy to announce two neat little features in our latest updates.
Firstly, all Open Access articles now have a cute little symbol next to them, making it even easier for you to discover open content. This shows up on all of our Open Access content across nearly 14 million article records now. Making open content stand out is a great way to encourage others to adopt open practices, as well as help people see which content they can re-use most easily.
As well as this, we have a new browsing function built into our collections. Sometimes, collections are pretty big: some of our new SciELO collections contain tens of thousands of Open Access articles, and sifting through those manually is not exactly a valuable use of one’s time.
With this new function, you can now filter content within collections by journal, publisher, keywords, and even filter them by citations or Altmetric scores. Discovering content relevant to your research should be smart and efficient, and this is what our platform delivers. Try it out on this collection, or build your own!
Context is something we’ve been thinking a lot about at ScienceOpen recently. It comes from the Latin ‘con’ and ‘texere’ (to form ‘contextus’), which means ‘weave together’. The implications for science are fairly obvious: modern research is about weaving together different strands of information, thought, and data to place your results into the context of existing research. This is the reason why we have introductory and discussion sections at the intra-article level.
But what about context at a higher level?
Context can be defined as: “The circumstances that form the setting for an event, statement, or idea, and in terms of which it can be fully understood.” Simple follow-on questions might then be: what is the context of a research article? How do we define that context? How do we build on it to do science more efficiently? The whole point of research articles is that they can be understood by as broad an audience as possible, so that their re-use is maximised.
There are many things that impinge upon the context of research: paywalls, secretive and exclusive peer review, poor discoverability, lack of interoperability, lack of accessibility. The list is practically endless, and largely a by-product of the failure of traditional scholarly publishing models to embrace a Web-based era.
Well, we’ve had some absolute stars recently in our ‘open science’ series! If you haven’t seen them yet, head over and check them out – such a diverse array of experiences and perspectives! Today we spoke with Josh King, the founder of Brevy. It’s an awesome new platform, and we’ll let Josh tell you more about it here, enjoy!
Hi Josh, thanks for joining us! Could you tell us a bit about why you started Brevy?
Brevy is an independent, volunteer group of a few stubborn individuals who work on the project during our off hours (read “nights and weekends”). While my own day job is in science outreach, I work with a couple of other partners (a fantastic computer science start-up owner and a behavioural psychologist make up our merry band) to help direct and maintain the site. We’re nothing special on our own, so the real stars here are those who pitch in, adding summaries to Brevy or introducing it in class assignments to help grow the body of content!
When did you first hear about Open Access and Open Science? What did you first think?
That would likely be during my undergraduate years, studying biochemistry and becoming hopelessly frustrated trying to write reports using papers I often had no access to (even with our university library!). At the time, I thought of the concepts as fanciful dreams, but thankfully here we are, with Open Access a growing paradigm and various Open Science platforms blossoming around the web.
What do you think the biggest problem with the current scholarly publishing system is?
Meaningful publishing. By reasonable estimates, more than 1,000,000 academic papers are published each year. These works appear on platforms known largely only to academics, and then only to a specific subset of academia. Publications on these platforms are not always accessible even to this select group, and generally do not support further dialogue or dissemination well, with a surprisingly large number going uncited. Taken pessimistically, this is tantamount to ejecting hundreds of thousands of new pieces of knowledge into the void each year.
We can be optimistic about this, however! Taken optimistically, there are hundreds of thousands of possibly exciting and ground-breaking new ideas out there all of the time that most of us don’t know about! But to see it this way, to truly believe it, we have to start caring about the meaningfulness of research. We have to start thinking about types of impact other than citation counts, and measures of prestige other than the journal name. And we have to care what our work means to the world outside academia.
The impact factor is academia’s worst nightmare. So much has been written about its flaws, both in calculation and application, that there is little point in reiterating the same tired points here (see here by Stephen Curry for a good starting point).
Recently, I was engaged in a conversation on Twitter (story of my life…) with the nice folks over at the Scholarly Kitchen and a few researchers. There was a lot of finger pointing, with the blame for impact factor abuse being aimed at researchers, publishers, funders, Thomson Reuters, and basically every player in the whole scholarly communication environment.
As with most Twitter conversations, very little was achieved in the moderately heated back and forth about all this. What became clear, though, or at least clearer, is that despite everything that has been written about the detrimental effects of the impact factor in academia, it is still widely used: by publishers for advertising, by funders for assessment, by researchers for choosing where to submit their work. The list is endless. As such, there are no innocents in the impact factor game: all are culpable, and all need to take responsibility for its frustrating immortality.
The problem is cyclical if you think about it: publishers use the impact factor to appeal to researchers, researchers use the impact factor to justify their publishing decisions, and funders sit at the top of the triangle facilitating the whole thing. One ‘chef’ of the Kitchen piped in by saying that publishers recognise the problems, but still have to use the impact factor because it’s what researchers want. This sort of passive facilitation of a broken system helps no one: it is a simple way of acknowledging that a metric is a problem while failing to take partial responsibility for its fundamental misuse. The same goes for academics.
(Note: these are just smaller snippets from a larger conversation)
What some of us did seem to agree on in the end, or at least a point that remains important, is that everyone in the scholarly communication ecosystem needs to take responsibility for, and action against, misuse of the impact factor. Pointing fingers and dealing out blame solves nothing: it just alleviates accountability without changing anything and, worse, facilitates what is known to be a broken system.
So here are eight ways to kick that nasty habit! The impact factor is often referred to as an addiction for researchers, or a drug, so let’s play with that metaphor.