Our ongoing ‘Open Science Stars’ series has highlighted some of the vast variety of views, experiences, and facets of open science, and a cadre of great people working to drive real and positive change. This week, we spoke with Fiona Nielsen, who has founded two companies dedicated to the sharing of genomics data! Here’s her amazing story.
Hi Fiona! Thanks for joining us at the ScienceOpen blog. Could you start off by letting us know a bit about your background?
Pleased to join your blog series. 🙂
I am a bioinformatics researcher with a background in computer science. My first degree was a short computer science degree, which I then expanded by studying bioinformatics at the University of Southern Denmark, where I gradually moved more and more into genetics and DNA sequence analysis. After my masters I moved to Nijmegen, the Netherlands where I studied for a PhD in bioinformatics at the NCMLS. During my time as a PhD student, my mother was diagnosed with cancer, and I lost my motivation to work on scientific topics far removed from patient impact. I moved to Cambridge, UK to work for Illumina, and after two years I decided to leave my 9-5 job to start my own project: I founded first the charity DNAdigest and later the company Repositive to enable better data sharing within genomics research.
When did you first become interested in Open Access and Open Science? What was your initial reaction when you heard about it?
I do not recall when I first came across the terms of Open Access and Open Science, but I do recall that I repeatedly came across anecdotes from colleagues that could not access data or results from published papers, and how I looked up to the progressive researchers who would “go all the way” and make all data and results available immediately, even before publication of a paper.
Today we are pleased to announce a major new partnership with SciELO, the Scientific Electronic Library Online. Many of you might know SciELO as the leading Open Access publisher in Latin America and in what we might consider developing or emerging countries. At last count, they had published almost 600,000 peer reviewed research articles in more than 1200 journals, and so constitute an enormous contribution to our global research knowledge!
SciELO content, however, is still largely excluded from what we might consider the ‘research powerhouses’ and “global” indexing platforms of the western world. In 2013, SciELO was integrated into the Web of Science, but the integration only covered around half of their journals. Some SciELO Brazil content is also indexed in Scopus, but this is a pay-to-access service.
As such, simply being Open Access is not sufficient in the current scholarly publishing climate – you have to be promoted, shared, and recognised too! This is crucial for publishers in terms of generating increased visibility, transparency, and credibility for research, all principles embodied by Open Access. So ScienceOpen is partnering with SciELO to generate increased visibility for its content, and to provide an enhanced global perspective on research.
Some might be wondering where you’ve heard of SciELO before. Well, Open Access advocate and keeper of predatory publishing lists Jeffrey Beall publicly commented last year that SciELO was akin to the ‘favelas’ of the scholarly publishing world, and created a bit of a stir. Thankfully, this derogatory and unnecessary characterisation was met with appropriate responses, but it revealed a somewhat ingrained cultural perspective that some ‘western’ academics, and those involved in scholarly publishing, might still hold: that research and publishing from Latin America and peripheral countries is of lower quality than that from the north, for no apparent reason other than geography – a factor often referred to as ‘ethnocentric prejudice’.
Well, at ScienceOpen we think such views are not helpful in creating a more global, collaborative and open research foundation. We believe that through integration we are stronger, and that we gain more by transcending barriers than creating them. The future of research is through global collaboration, sharing, and enabling open practices, and this is what we’re doing with SciELO. Indeed, SciELO are arguably doing more to advance Open Access publishing and global knowledge than many well-established publishers in Europe and North America!
This is why partnering with SciELO is so exciting for us, and for many reasons!
Context is something we’ve been thinking a lot about at ScienceOpen recently. It comes from the Latin ‘con’ and ‘texere’ (to form ‘contextus’), which means ‘weave together’. The implications for science are fairly obvious: modern research is about weaving together different strands of information, thought, and data to place your results into the context of existing research. This is the reason why we have introductory and discussion sections at the intra-article level.
But what about context at a higher level?
Context can be defined as: “The circumstances that form the setting for an event, statement, or idea, and in terms of which it can be fully understood.” Simple follow-on questions might then be: what is the context of a research article? How do we define that context? How do we build on that to do science more efficiently? The whole point of research articles is that they can be understood by as broad an audience as possible so that their re-use is maximised.
There are many things that impinge upon the context of research: paywalls, secretive and exclusive peer review, lack of discoverability, lack of interoperability, lack of accessibility. The list is practically endless, and it is largely a by-product of traditional scholarly publishing models’ failure to embrace a Web-based era.
Well, we’ve had some absolute stars recently in our ‘open science’ series! If you haven’t seen them yet, head over and check them out – such a diverse array of experiences and perspectives! Today we spoke with Josh King, the founder of Brevy. It’s an awesome new platform, and we’ll let Josh tell you more about it here, enjoy!
Hi Josh, thanks for joining us! Could you tell us a bit about why you started Brevy?
Brevy is an independent, volunteer group of a few stubborn individuals who work on the project during our off hours (read “nights and weekends”). While my own day job is in science outreach, I work with a couple of other partners (a fantastic computer science start-up owner and a behavioural psychologist make up our merry band) to help direct and maintain the site. We’re nothing special on our own, so the real stars here are those who pitch in by adding summaries to Brevy or introducing it as a class assignment to help grow the body of content!
When did you first hear about Open Access and Open Science? What did you first think?
That would likely be during my undergraduate years studying biochemistry and becoming hopelessly frustrated trying to write reports using papers I often had no access to (even with our university library!). At the time, I thought of the concepts as fanciful dreams, but thankfully here we are with Open Access a growing paradigm and various open science platforms blossoming around the web.
What do you think the biggest problem with the current scholarly publishing system is?
Meaningful publishing. By reasonable estimates, more than 1,000,000 academic papers are published each year. These works are published on platforms known largely only to academics, and then only to a specific subset of academia. Publications on these platforms are not always accessible even to this select group, and generally do not support further dialogue or dissemination well, with a surprisingly large number going uncited. Taken pessimistically, this is tantamount to ejecting hundreds of thousands of new pieces of knowledge into the void each year.
We can be optimistic about this however! Taken optimistically, there are hundreds of thousands of possibly exciting and ground-breaking new ideas all of the time that most of us don’t know about! But to see it this way, to truly believe it, we have to start caring about the meaningfulness of research. We have to start thinking about different types of impacts than citation count and means of prestige other than the journal name. And we have to care what our work means to the world outside academia.
But did you know that anyone can review any article they want on ScienceOpen, and not just those from ScienceOpen Research? And perhaps more importantly, anyone can invite anyone else to review any article? That sounds an awful lot like the day job of Editors at traditional journals – but with the power firmly in the hands of researchers and their communities. How cool is that?
It’s super easy to implement too. All you have to do is go to an article of choice, click the ‘Reviews’ button (Step 1), and then select the ‘Invite to Review’ button (Step 2). If you were feeling inclined, you could review the paper yourself too!
You can then simply select their ScienceOpen username (what, you don’t have one yet?!), or invite them by email (Step 3).
Recently, we’ve been running an ‘open science stars‘ series to highlight a range of great people from around the world working to advance open science practices. This week, we have something a little special for you. All previous interviews have been with students or researchers, but this story is from a physician in the United States Navy, Commander Jean-Paul Chretien! So sit back and enjoy the show.
Hi Commander Jean-Paul! For starters could you let us know a little about your background?
Thank you for interviewing me. Let me say first that throughout this interview I’m expressing my own views, not necessarily the official policy or position of the Department of Defense, Defense Health Agency, or US Government.
I’m a physician in the United States Navy, and my training is in public health, epidemiology, and informatics. I work on challenges at the intersection of health and national security, like infectious disease outbreaks and climate change.
I was drawn first to the military, before medicine, but I knew what life as a doctor is like because my parents are physicians. I wanted to be a military officer from a pretty young age. Service to country, the chance to lead, the adventure – all of that appealed to me. For college I went to the U.S. Naval Academy in Annapolis, Maryland, thinking maybe I would command a warship someday. But while I was there, studying international affairs and national security, I learned that some of the most pressing security challenges were health problems like HIV/AIDS, at the time. And I learned that in many battles and wars, diseases crippled military forces and civilian populations in war zones. Infections often caused more casualties than combat.
So I decided to go to medical school, but not to be a doctor practicing in a clinic. I wanted to be a doctor for populations, and bring medical knowledge to decisions that impact military service members, the broader American public, and, well, everyone.
When did you first hear about Open Access and Open Science? What were your first thoughts? Has there ever been a case where lack of access to information has seriously compromised your work?
When I was a student working on my MD and my PhD in epidemiology, I didn’t think about Open Access because access wasn’t a problem for me. Through my university, I could access just about any journal article I needed. But later, when I began my global health work in the U.S. military, I saw how access restrictions constrained biomedical research, patient care, and population health around the world.
At that time, I worked in a Department of Defense program that partners with dozens of countries to improve their capabilities for detecting and containing epidemics. I had collaborators around the world, public health personnel and researchers in countries with limited resources, who could not read about studies on the diseases that burden them. How can they join the global effort against infectious disease outbreaks if they can’t always access the most current and best research on those diseases?
Then, what galvanized my commitment to open access and open science in general was the Ebola outbreak that began in West Africa in late 2013, and spread to Europe and the U.S. It’s waning now, but infections are still occurring. There have been more than 28,000 confirmed cases with around 11,000 deaths, by far the largest Ebola epidemic ever.
Continuing our ‘open science stars’ series, we’re happy to present Dr. Julien Colomb this week! Julien is a postdoc in Berlin, and we’ve been working together (well, Julien has tolerated my presence..) at Open Science meetups here, which he’s been using to build an active community over the last 10 months or so. He recently published a cool paper in PeerJ and built a new ScienceOpen Collection, so we asked for his thoughts and experience with Open Science!
Hi Julien! Thanks for joining us at the ScienceOpen blog. Could you start off by letting us know a bit about your background?
Hi John. My pleasure to be here. [We’ve known each other for a year and he still can’t spell my name..]
I have been interested in neurobiology since high school; I got to work with Drosophila during my Master’s thesis and could not leave the field afterwards. I worked for about 10 years on neuroanatomy and behaviour in fruit fly larvae and adults, in Switzerland, Paris and Berlin. In 2013, I decided to stay in Berlin when the mentor of my second post-doc, Prof. Brembs, moved to Regensburg. In the last 3 years, I have been jumping between different jobs in Prof. Winter’s groups, wandering in the startup community in Berlin (founding Drososhare GmbH), and trying to foster open science and open data. At the moment, I work half time at the Charité animal outcome core facility, while we work on getting a beta version of the Drososhare product (a platform for sharing transgenic Drosophila between scientists). I also run the Berlin Open Science Meetup.
The impact factor is academia’s worst nightmare. So much has been written about its flaws, both in calculation and application, that there is little point in reiterating the same tired points here (see here by Stephen Curry for a good starting point).
Recently, I was engaged in a conversation on Twitter (story of my life..), with the nice folks over at the Scholarly Kitchen and a few researchers. There was a lot of finger pointing, with the blame for impact factor abuse being aimed at researchers, at publishers, funders, Thomson Reuters, and basically any player in the whole scholarly communication environment.
As with most Twitter conversations, very little was achieved in the moderately heated back and forth about all this. What became clear, though, or at least more so, is that despite everything that has been written about the detrimental effects of the impact factor in academia, it is still widely used: by publishers for advertising, by funders for assessment, by researchers for choosing where to submit their work. The list is endless. As such, there are no innocents in the impact factor game: all are culpable, and all need to take responsibility for its frustrating immortality.
The problem is cyclical if you think about it: publishers use the impact factor to appeal to researchers, researchers use the impact factor to justify their publishing decisions, and funders sit at the top of the triangle facilitating the whole thing. One ‘chef’ of the Kitchen piped in by saying that publishers recognise the problems, but still have to use the impact factor because it’s what researchers want. This sort of passive facilitation of a broken system helps no one: it is a way of acknowledging that a metric is problematic while failing to take any responsibility for its fundamental mis-use. The same applies to academics.
(Note: these are just smaller snippets from a larger conversation)
What some of us did seem to agree on in the end, or at least a point that remains important, is that everyone in the scholarly communication ecosystem needs to take responsibility for, and action against, mis-use of the impact factor. Pointing fingers and dealing out blame solves nothing: it alleviates accountability without changing anything and, worse, facilitates what is known to be a broken system.
So here are eight ways to kick that nasty habit! The impact factor is often referred to as an addiction for researchers, or a drug, so let’s play with that metaphor.
I remember my first peer review. An Editor for a well-respected Elsevier journal in Earth Sciences emailed me during the second year of my PhD, asking me to peer review a paper for them. I hadn’t published anything by this point of my PhD, and had received no formal training in how to peer review papers. I initially declined, but was pretty much coerced into doing it, despite my reservations. “It’ll be great training and experience”, I was told. Go on. Go on go on go on go on go on. In the end, I did the review, but got my supervisor to check it over to make sure I was fair, thorough, and constructive. I remember him saying “This is surprisingly good!”, and thinking ‘Thanks..’. But his response was more because it was my first peer review, done without any training, rather than anything to do with my ability as a scientist. And rightly so – why should I have been expected to do a good job of peer review at such an early stage in my career, and with no formal training?
I wonder then how many other PhD students are told the same, and thrown into the deep end. ‘Peer review for this journal and receive fame and glory. It doesn’t matter how well you do it, as long as you do it.’
I get the feeling that some researchers regard public, post-publication peer review as a non-rigorous, non-structured and poor alternative to traditional peer review. Much of this might be down to the view that there are no standards, and no control in a world of ‘open’.
This couldn’t be further from the truth.
At venues like ScienceOpen and F1000 Research, there is full Editorial control over peer review. The only difference is that there is an additional safeguard against fraud and abuse: in public peer review, the quality (and quantity) of the process is made explicit. Both the report and the identity of the reporter are made open. This type of system invites civility and community engagement, and lays the foundation for crediting referees. It also highlights an under-appreciated and overlooked aspect of the work that scientists do to advance knowledge in the real world.
ScienceOpen Editor Dan Cook said: “Personally, I think the public needs to know how hard scientists work to advance our understanding of the world.”
At ScienceOpen, the Editorial office plays two roles. First, the Editorial team for ScienceOpen Research performs all the basic standards checks to make sure that research published is at an appropriate scientific standard. They attempt to protect against pseudoscience, and ensure that the manuscript is prepared to undergo public scrutiny. Second, there are Collection Editors, who manage peer review, curation, and discussion about their own Collections.
Why is Editorial control so important?
For starters, without an Editor, peer review rarely gets done. Researchers are busy, easily distracted, and working on 1000 other things at once. Opting to go out into the world and distribute your knowledge through unsolicited peer review, while selfless, is actually quite a rare phenomenon.
Peer review needs structure, coordination, and control. In the same way as traditional peer review, this can be facilitated by an Editor.
But why should this imply a closed system? In a closed system, who is peer reviewing the Editors? What are editorial decisions based on? Why, and how, are Editors selecting reviewers?
These are all questions that are obscured by traditional peer review, and traits of a closed, secretive, and subjective system – not the rigorous, objective, gold standard that we hold peer review to be.
At ScienceOpen, we recognise this dual need for Editorial standards combined with transparency. Transparency leads to accountability, which in turn lends itself to a less biased, more rigorous and civil process of peer review.
How does Editorial coordination work with Collections?
Collections are the perfect place to demonstrate and exercise editorial management. Collection Editors, of which there can be up to five per Collection, have the authority to manage the process of peer review, but out in the open.
They can do this either by inviting external colleagues to review papers within the system or, if those colleagues already have a profile with us, by simply inviting them to review specific papers; referees will then receive an invitation to peer review.
Quality control is facilitated through ORCID: referees must have five items associated with their ORCID account in order to formally peer review. And to comment, all you need is an ORCID account, simples!
The major difference between a traditional Editor and a Collection Editor is selection. As a traditional Editor, you wield supreme power over what ultimately becomes published in the journal by deciding what gets rejected and what gets sent out to peer review. As a Collection Editor, you don’t reject anything – you filter from pre-existing content depending on your scope.