I want to share with you something cool that we have developed at ScienceOpen.
In my former life, as an editor working for a traditional scientific publisher, I had a broad overview of my subject area, but my level of expertise was not close to that of a practicing researcher working in the field. Every day I needed to answer questions like “Who is the most influential researcher in niche area X?”; “How does our recently published work stack up against similar articles Y?”; “Are people talking more about topic A or B?”.
Editors are not alone in facing these pressing questions. Everyone who searches for information in a field beyond their immediate expertise faces similar problems. In an Elsevier study, 87% of researchers reported cross-disciplinary searching in new fields at least once a month.
So what was my solution at the time? Back then, in our small publishing house, a subscription to privately held scholarly databases, which could run to ten or twenty thousand dollars, was simply out of the question. We could make an educated guess; but knowledge is always preferable to guessing. So we ended up taking the subway across town to use the major databases that were only available at the library. In those days, I would have done anything for a freely available open citation network that could tell me the top cited papers and authors across all publishers, recommend related articles, and show which topics are getting the most traction in the popular media.
What did I have to do to get my freely available open citation network? Together with the ScienceOpen Team, WE BUILT ONE!! This tool is so awesome that I constantly have to stop myself from accosting strangers on the subway to tell them how much easier we just made their search experience. “Forget about the library,” in case they are on their way to access Web of Science or Scopus, “you can search from your home, office, or right now on your smartphone!”
So how does it work? ScienceOpen already covers over 10 million articles and is growing fast. Type in your search term and filter your results in a myriad of ways. Only articles published in the last two years? Easy. Only Open Access? Check. Even with these filters applied, a search for “Diabetes” brings back 13,053 results. Dilemma. What to read? Sort your results by “Cited by count”. The citation numbers don’t claim to be comprehensive, but they do provide an accurate picture of the citation relationships on the site. Already, it’s easier for me to get a quick overview of what the community finds most important. I can also start asking questions like: why are some papers with an Altmetric score of over 500 cited 20 times, while other papers with an Altmetric score of 3 are cited hundreds of times?
When I pick a paper to explore more deeply, ScienceOpen offers me the list of the paper references – sorted by citation number, a list of cited authors linked to their other publications in the network, and similar articles based on keywords and title. I can play with this tool all day. But if I need to find a reviewer, a collaborator, an author, an expert, then I am already well on my way. No more long subway rides to access privately held scholarly databases.
Try out this new ScienceOpen feature and tell a friend (but maybe not a stranger on the subway!).
I wrote this post on the plane back from my trip to Shanghai after a multiple-day delay that (looking on the bright side) allowed me to see some of the sights, courtesy of Hainan Airlines!
I was invited to speak at the 3rd International Academic Publishing Forum on August 19th. Organized by the Shanghai Jiao Tong University Press, the event brought together nearly 60 Chinese university presses and representatives from some Western academic publishers – Elsevier, Wiley, Springer, Sage, Brill and ScienceOpen – to discuss what we can learn from one another.
My most powerful impression was the high value China places on knowledge. Mr. Shulin Wu, Vice-Chairman of the Publishers Association of China, said in his keynote speech that the government regards “knowledge production to be as important as mining or oil”. And China is set to surpass both the US and the EU in spending on research and development by 2020. Communicating this knowledge therefore also has a high priority, and falls mainly to the university presses. Their main short-term goals, expressed over the two days, were the internationalization and digitalization of their content, with language seen as the main hurdle. Certainly all had a plan for going global.
But some publishers, myself included, were already thinking beyond internationalization and digitalization to the next step in academic publishing. Jason Wu hit the nail on the head by describing Wiley’s process of transformation “from publishing business to global provider of knowledge and learning services.” Solutions for researchers must be digital, global, mobile, and interdisciplinary (Bryan Davies of Elsevier quoted a study that found 44% of researchers look for information outside of their own field). And Open Access is a good place to start.
The Open Access business model for journal publishing is perfect for Chinese publishers, who have until now been dependent on cooperation with Western publishers to get their authors heard. Chinese scientists who do world-class research can publish in “world-class” journals such as Science or Nature, but publishers here were asking themselves the hard question – why are so few of those world-class journals published in China? While Open Access cannot itself address the problem of reputation, it can ensure that research can be read immediately and globally, without a team of sales representatives on every continent. As essentially non-profit entities with a mission to communicate China’s research successes to the world, they are uniquely situated. With access to so much outstanding research, I sincerely hope that Chinese publishers will embrace this opportunity.
Riding the Shanghai subway, I can attest that young Chinese are constantly networking on their mobile devices. A scientific networking and research platform like ScienceOpen would have a good chance of catching the imagination of young scientists in China. But time will tell how open this generation will be allowed to be. During my stay, the Chinese government shut down up to 50 online news websites and nearly 400 Weibo and WeChat accounts for spreading “rumours” about the recent chemical explosion, which took 129 lives. Twitter, Facebook, Google and many other sites were blocked during my visit, which left me feeling rather cut off from the rest of the world.
It was a crazy week – from the crowds and flashing neon of Shanghai to the peaceful magnificence of the Great Wall. I came away with a sense of the huge potential in China and the feeling that China needs Open Access and the Open Access movement needs China.
Here at ScienceOpen we wear a few different hats! We’re a gold Open Access (OA) publisher, a peer review reformer and a content aggregator.
This week, with the London Book Fair 2015 about to start, we are celebrating publishers and societies by profiling the innovative ways that they are using our platform!
It gives us great pleasure to report how a top scientific union and a major medical publisher (see below) are now using our platform to give their OA content increased visibility and facilitate scientific discussion.
With 1.5 million OA articles and a high-performance search engine on ScienceOpen, users can slice and dice the content as they like. Often that selection criterion is a trusted publisher or an innovative journal, and ScienceOpen makes that easy! With ScienceOpen Collections we’re able to highlight the articles of publishers and societies. Other innovative ways to use the Collection Tool are discussed in this blog post.
It’s March, and so naturally the upcoming whirlwind of large scholarly conferences is on my mind. If I were still in the USA, I might also be participating in a friendly basketball bet!
I recently attended the 5th International Conference of the Flow Chemistry Society in Berlin. It was expertly organized by SelectBio and featured everything that we expect from a scholarly conference – top scientists as keynote speakers, a poster session for Early Career Researchers to present preliminary data and, most importantly, coffee breaks to raise our energy so that we can exchange ideas with other participants.
But one innovation struck me: each participating poster exhibitor had been offered the opportunity to publish their poster via e-Posters. Because ScienceOpen also offers poster publishing (now free of charge), I was interested to exchange experiences with them. I had a great talk with Sara Spencer about how poster publishing can support researchers by encouraging discussion of their work after the conference or with colleagues who were not able to attend. Publishing them on a platform that provides each one with a DOI, as we do here at ScienceOpen, also means that the author can be credited if the poster is, for example, photographed and shared on Facebook.
However, both Sara and I have also observed that scientists are sometimes hesitant to “publish” their posters at all which surprised us since the benefits seem clear. The two most frequent questions about poster publishing that we encounter are:
What is the advantage of publishing my poster?
Some posters get hung in the department hallway but most end their lives rolled up under a desk somewhere. By making your poster digitally available beyond its physical presence at a conference, you can extend the discussion of your research and possibly even find new collaborators. Of course, you can also do this by posting it on your website or in a repository. But by publishing it under a CC BY license and with a CrossRef registered DOI, you also make it possible to track the impact it has by recording altmetrics such as downloads, social shares etc – making it a much more valuable asset for your CV.
This is preliminary research, can I publish these results later as a research article?
Most publishers recognize that science cannot move forward in a communication vacuum, and rules around sharing are changing with the rise of online discussion forums. No one is quite sure where the new lines on such issues will be drawn. Scientists regularly share their preliminary research at conferences in the form of talks and posters, or on pre-print servers such as arXiv or bioRxiv. Early feedback can save a researcher time and funding dollars.
The scientific community understands that there is a big difference between preliminary results presented in a pre-print or a poster and a full research paper. Most journal editors also have no problem making this distinction. A list of the pre-print policies of major academic journals can be found on Wikipedia. A list of how different journals view F1000Posters (and most do not regard them as pre-publication) can be found here.
However, it’s important to know that some journals do still regard posters as prior publication, and these include some big names such as the journals of the American Chemical Society, Royal Society of Chemistry, American Physiological Society, American Society for Microbiology and the NEJM. When we contacted some poster session organizers at a large society conference about the possibility of publishing this content with a DOI on ScienceOpen, one of them checked back with the society for its view and received this ominous warning:
We would caution you, and we would ask you to caution your presenters, that intellectual property rights issues, such as patent or other proprietary concerns, may be implicated by agreeing to the publication of posters.
Our answer to the above statement is to ask “how so?” Whether the author retains copyright and grants a CC license to publish, or transfers copyright to the publisher, how is the IP of a poster different from that of an article? If they mean, as stated on the F1000Posters list, that they consider the limited and often preliminary content displayed on posters from Early Career Researchers to be prior publication, then we say “good luck with that view in the digital age”!
What seems more likely to us is that large traditional publishers are using the same IP “scare tactics” that we last saw in the early days of Open Access. They are trying to discourage poster or pre-print publishing with a DOI (much as with their restrictive policies on live tweeting at conferences) because they don’t want these citations to lower the Impact Factors of their journals.
The scientific community is beginning to experiment with the new tools for sharing and networking online and this is putting pressure on established structures and rules. To them we say:
Be sure to publish your posters or pre-prints with a DOI so they can be found and cited. Then publish your subsequent full article with organizations that have progressive policies on prior-sharing, preferably Open Access!
Earlier this summer, I skyped with Damian Pattinson, the Editorial Director of PLOS ONE, about the Impact Factor, its widespread misuse and how, thankfully, altmetrics now offer a better way forward.
Q. The PLOS ONE Impact Factor has decreased for a few years in a row. Is this to be expected given its ranking as the world’s largest journal and remit to publish all good science regardless of impact?
A. I don’t think the Impact Factor is a very good measure of anything, but clearly it is particularly meaningless for a journal that deliberately eschews evaluation of impact in its publications decisions. Our founding principle was that impact should be evaluated post-publication. In terms of the average number of citations per article, my sense is that this is changing due to the expanding breadth of fields covered by PLOS ONE, not to mention its sheer size (we recently published our 100,000th article). When you grow as quickly as we have, your annual average citation rate will always be suppressed by the fact that you are publishing far more papers at the end of the year than at the beginning.
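Damian's point about rapid growth can be made concrete with a toy calculation. The sketch below uses invented numbers (the function name, monthly outputs and per-month citation rate are all assumptions for illustration, not PLOS data): even if every paper collects citations at an identical steady rate, a journal that doubles its output over the year shows a lower year-end average than a steady-state journal, simply because most of its papers are young.

```python
# Toy model: why rapid growth suppresses a journal's average
# citations per paper (illustrative numbers only, not PLOS data).

def mean_citations(papers_per_month, cites_per_month_of_age):
    """Average citations per paper at year end, given monthly output
    and a fixed citation rate per month since publication."""
    total_papers = sum(papers_per_month)
    # A paper published in month m has (12 - m) months to collect citations.
    total_cites = sum(n * cites_per_month_of_age * (12 - m)
                      for m, n in enumerate(papers_per_month))
    return total_cites / total_papers

steady = [100] * 12                                # constant output
growing = [25 * 2 ** (m // 3) for m in range(12)]  # doubles every quarter

print(mean_citations(steady, 0.1))   # steady journal
print(mean_citations(growing, 0.1))  # fast-growing journal: lower average
```

The growing journal's average comes out lower even though each individual paper performs identically; the denominator is dominated by recent papers that have had little time to be cited.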
Q. Articles at PLOS ONE undoubtedly vary in terms of the number of citations they accrue. Some are higher, some lower. Is there an observable pattern to this trend overall that is not reflected by a simple read of the Impact Factor?
A. Differences in the average number of citations are, to a large extent, subject-specific and therefore a reflection of the size of a particular research community. Smaller fields simply produce fewer scientific papers, so statistically it is less likely that even a highly cited paper will accrue as many citations as one published in a larger research field. Such a subject-specific examination may also reveal different patterns if one looks at metrics besides citation. That is something we are very interested in exploring with Article-Level Metrics (ALM).
Q. Has the reduction of PLOS ONE’s Impact Factor influenced its submission volume or is that holding up relatively well?
A. Actually, the effective submission volume is still increasing, even though the rate of growth has slowed. Year-on-year doubling in perpetuity is not realistic in any arena. We have seen a drop in the number of publications, however, due to a number of factors. Most notably, we have seen an increase in the rejection rate as we continue to ensure that the research published in PLOS ONE is of the highest standard. We put all our papers through rigorous checks at submission, including ethical oversight, data availability and adherence to reporting guidelines, so more papers are rejected before being sent for review. We have also found an increase in submissions better suited to other dissemination channels, and have worked with authors to pursue them. But to your point, I do not think that last year’s changing IF directly affected PLOS ONE submission volume.
Q. Stepping back for a moment, it really is extraordinary that this arguably flawed mathematical equation, first mentioned by Dr Eugene Garfield in 1955, is still so influential. Garfield said “The impact factor is a very useful tool for evaluation of journals, but it must be used discreetly”.
It seems that the use of the IF is far from discreet, since it is a prime marketing tool for many organizations, although not at PLOS, which doesn’t list the IF on any of its websites (kudos). But seriously, do you agree with Garfield’s statement that the IF has any merit in journal evaluation, or that evaluating journals at all in the digital age has any merit?
A. Any journal level metric is going to be problematic as “journals” continue to evolve in a digital environment. But the IF is particularly questionable as a tool to measure the “average” citation rates of a journal because the distribution is hardly ever normal – in most journals a few highly cited papers contribute to most of the IF while a great number of papers are hardly cited at all. The San Francisco Declaration on Research Assessment (DORA) is a great first step in moving away from using journal metrics to measure things they were never intended to measure and I recommend everyone to sign it.
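The skew Damian describes is easy to see with a toy distribution (the citation counts below are invented for illustration): a couple of highly cited papers pull the mean far above the median, which is why an "average citations" figure like the IF says little about a typical paper in the journal.

```python
# Hypothetical citation counts for ten papers in one journal.
from statistics import mean, median

citations = [0, 0, 0, 1, 1, 2, 2, 3, 120, 250]

print(mean(citations))    # 37.9 (driven almost entirely by two papers)
print(median(citations))  # 1.5 (what a typical paper actually receives)
```

Reporting only the mean of such a distribution, as the IF effectively does, tells you about the outliers rather than about most of the papers.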
Q. What are the main ways that the IF is misused, in your opinion?
A. The level to which the IF has become entrenched in the scientific community is amazing. Grants, tenure and hiring at nearly every level depend on the IF of the journals in which a researcher publishes his or her results. Nearly everyone realizes that it is not a good way to measure quality or productivity, but uses it anyway. Actually, it’s more complicated than that – everyone uses it because they think that everyone else cares about it! So academics believe that their institutions use it to decide tenure, even when the institutions have committed not to; institutions think that the funders care about it despite commitments to the contrary. In some way the community itself needs to reflect on this and make some changes.

The IF creates perverse incentives for the entire research community, including publishers. Of course journals try to improve their score, often in ways that are damaging to the research community. Because of how the IF is calculated, it makes sense to publish high-impact papers in January so that they collect citations for the full 12 months. Some journals hold back the best papers for months to increase the IF – which is bad for researchers as well as for the whole of science. Journals also choose to publish papers that may be less useful to researchers simply because they are more highly cited. So they will choose to publish (often unnecessary) review articles, while refusing to publish negative results or case reports, which will be cited less often (despite offering more useful information).
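For readers unfamiliar with the arithmetic behind the January effect mentioned above: the standard two-year Impact Factor for year Y divides the citations received in Y (to items from Y-1 and Y-2) by the citable items published in those two years. A minimal sketch with invented numbers:

```python
def impact_factor(cites_in_year, citable_items):
    """Two-year JIF for year Y: citations received in year Y to items
    published in Y-1 and Y-2, divided by those citable items."""
    return cites_in_year / citable_items

# e.g. 900 citations in 2015 to papers from 2013-2014, 300 citable items:
print(impact_factor(900, 300))  # 3.0
```

Because only whole publication years enter the calculation, a paper published in January of Y-1 has nearly 24 months to accumulate the citations counted in year Y, while a December paper from the same year has barely 12; hence the incentive to front-load strong papers.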
Q. Could you imagine another metric which would better measure the output of journals like PLOS ONE?
A. Of course you are right, for journals that cover a broad range of disciplines or for interdisciplinary journals, the Impact Factor is even less useful because of the subject-specific statistics we spoke of earlier. There have been a number of newcomers such as ScienceOpen, PeerJ and F1000Research with a very broad scope – as these and other new platforms come into the publishing mainstream, we may find new metrics to distinguish strengths and weaknesses. Certainly the Impact Factor is not the best mechanism for journal quality and, even less so, researcher quality.
Q. How do you feel about ScienceOpen Advisory Board Member Peter Suber’s statement in a recent ScienceOpen interview that the IF is “an impact metric used as a quality metric, but it doesn’t measure impact well and doesn’t measure quality at all.”
A. How often a paper is cited in the scholarly literature is an important metric. But citations are a blunt tool at best to measure research quality. We do not know anything about the reason a paper was cited – it could in fact be to refute a point or as an example of incorrect methodology. If we only focus on citations, we are missing a more interesting and powerful story. With ALMs that also measure downloads, social media usage, recommendations, and more, we find huge variations in usage. In fields beyond basic research, such as clinical medicine or applied technology, which have implications for the broader population, a paper may have a big political impact even though it is not highly cited. ALMs are really starting to show us the different ways different articles are received. At the moment we do not have a good measure of quality, but we believe reproducibility of robust results is key.
At PLOS we have been at the forefront of this issue for many years, and are continuing to innovate to find better ways of measuring and improving reproducibility of the literature. With our current focus on “impact” we are disproportionately rewarding the “biggest story” which may have an inverse relationship to reproducibility and quality.
Q. PLOS has a leadership role within the Altmetrics community. To again quote ScienceOpen Advisory Board Member Peter Suber on the current state of play: “Smart people are competing to give us the most nuanced or sensitive picture of research impact they can. We all benefit from that.”
Did PLOS predict the level to which the field has taken off and the amount of competition within it or is the organization pleasantly surprised?
A. The need was clearly there and only increasing over time. When we began our Article-Level Metrics (ALM) work in 2009, we envisioned a better system for all scholars. This is certainly not something specific to Open Access.
Since then, the particular details of how we might better serve science continue to evolve, especially now that the entire community has begun to actively engage with these issues together. It’s great that there is increasing awareness that the expanding suite of article activity metrics cannot fully come of age until data are made freely available for all scholarly literature and widely adopted. Only then can we better understand what the numbers truly mean in order to appropriately apply them. We anticipate that open availability of data will usher in an entire vibrant sector of technology providers that each add value to the metrics in novel ways. We are seeing very promising developments in this direction already.
Q. What’s next for PLOS ALM in terms of new features and developments?
A. Our current efforts are primarily focused on developing the ALM application to serve the needs not only of single publishers but of the entire research ecosystem. We are thrilled too that the community is increasingly participating in this vision, as the application grows into a bona fide open source community project with collaborators across the publishing sector, including CrossRef. On the home front, the application is essentially an open framework that can capture activity on scholarly objects beyond the article, and we’ll be exploring this further with research datasets. Furthermore, we will be overhauling the full display of ALM on the article page metrics tab with visualizations that tell the story of article activity across time and across ALM sources. We will also release a set of enhancements to ALM Reports so that it better supports the wide breadth of reporting needs for researchers, funders, and institutions.
David Black is Secretary General of the International Council for Science (ICSU) and Professor of Organic Chemistry at the University of New South Wales, Australia. An advocate of Open Access for scientific data in his role at ICSU, Professor Black is a proponent of the initiatives of ICSU and ICSU-affiliated groups, such as the Committee on Freedom and Responsibility in the Conduct of Science (CFRS), the ICSU World Data System (ICSU-WDS), the International Council for Scientific and Technical Information (ICSTI) and ICSU’s Strategic Coordinating Committee on Information and Data (SCCID).
Today, we’re featuring a video clip by one of our newest Editorial Board members, Robson Santos:
Professor Santos is Full Professor at the Institute of Biological Sciences of the Federal University of Minas Gerais, Brazil. He is secretary of the Inter-American Society of Hypertension and president of the Brazilian Society of Hypertension Scientific Committee. He has been coordinator of the Laboratory of Hypertension at the Federal University of Minas Gerais since 1985, has supervised more than 30 MSc and more than 25 doctoral students, and has published over 150 articles in international publications.
Our interview series continues with a quick chat with ScienceOpen Editorial Board Member and recent ScienceOpen author Professor Miguel Andrade. His paper, entitled “FASTA Herder: A web application to trim protein sequence sets” (http://goo.gl/4qa7Ez), presents a publicly available web application that uses an algorithm to identify redundant sequence homologs in protein databases.