
ScienceOpen Author Interview Series – Daniel Graziotin
Today’s interview comes from another recent ScienceOpen author, Daniel Graziotin. In his ScienceOpen article, “Green open access in computer science – an exploratory study on author-based self-archiving awareness, practice, and inhibitors,” he analyses the results of an exploratory questionnaire given to computer scientists. It addressed issues around various forms of academic publishing, self-archiving of research, and copyright. In the following interview with ScienceOpen, Graziotin offers a unique perspective, both as a young scientist and as a software and web developer who came to scientific publishing from the world of open software development. His ideas are bound to be of particular interest to emerging scientists, as he represents a new, globally engaged generation of researchers making full use of open knowledge to drive innovation.
Daniel Graziotin is a PhD student at the Free University of Bozen-Bolzano, Italy, studying computer science with a focus on human aspects of software engineering. He has already published extensively, is a member of the ACM, SIGSOFT, IEEE, and the IEEE Computer Society, and serves as an Editorial Associate with the Journal of Open Research Software. He is also the local coordinator of the Italian Open Science local group (http://openscience.it).
Q. As a PhD student, how did you discover open access and what do you think about it?
Although I try to stay skeptical, as any good scientist should be, I am biased when it comes to open access publishing and open science in general. Before starting my research activities, I grew up in open source software development and advocacy. Through open source software development, its communities, initiatives, and Linux User Groups, I came to believe that openness should form the basis of innovation. From the start of my research activities, I took the parallels between open source software and scientific research for granted. I thought that every research artifact was freely available. After all, it was science: every scientist should be able to access the produced knowledge, evaluate it, and build upon it. Just think of the famous Linus’s Law (http://en.wikipedia.org/wiki/Linus’s_Law): “given enough eyeballs, all bugs are shallow.” To me, that applies to science, too.
I was deeply disappointed to learn that science is closed. Even worse, several junior and senior colleagues were not aware of that. Luckily, I quickly discovered that there are initiatives like open access publishing and open science in general, supported by talented advocates and organizations such as the Open Knowledge Foundation. As somebody who grew up in openness, I started to do everything I could to improve the situation. My life slogan is “I believe that knowledge sharing is the key towards a better world. In order for this to happen, knowledge must be free.” It is stated on the homepage of my website, http://ineed.coffee. I have committed to making 100% of my published research open access (green or gold) (https://impactstory.org/DanielGraziotin), so that anybody in the world can read it for free. I began exploring open access journals and amazing open science projects like figshare, where I like to free up all my presentations and as many datasets as I can (http://figshare.com/authors/Daniel_Graziotin/396749). This was so different from traditional, too often boring, academic journals. People could learn something from artifacts that were not necessarily peer reviewed. I could also learn from other people’s work. Amazingly beautiful.
As soon as I discovered initiatives such as the Journal of Open Research Software, where open access papers about open source software for research are peer reviewed and published, I began convincing my supervisors, Pekka Abrahamsson and Xiaofeng Wang, to give open access and open science a try. They have been supportive from day one, and I am very thankful to them. Given the positive experience (described in this article: https://thewinnower.com/papers/an-author-based-review-of-the-journal-of-open-research-software), I kept pushing to publish research artifacts in the open. I discovered other innovative open access journals like PeerJ, which has terrific business and review models (pre-publication peer review, which can optionally become public and open). Along the way, I learned that science can be exciting, and that innovation in science is indeed possible. It is also becoming clear to me that innovation in science means embracing openness. This is exciting: the journey is still long, but openness will definitely become a standard in science. It is up to us to accelerate this process.
Newcomers such as The Winnower and ScienceOpen are another step forward. Both of them embrace post-publication, open peer review. The Winnower aims to innovate radically, for example by also publishing academic blog posts. However, given its experimental nature, we do not really know which path it will take (still, I wish the journal great success, and I will do my best to help improve it).
Q. Why did you decide to publish in ScienceOpen?
It appears that ScienceOpen has chosen its path and is following it. It is innovative because it aims to be a platform for nearly the whole research process. First, it collects millions of papers published in open access venues. Researchers will be able to virtually meet on ScienceOpen, maybe while discussing the same paper or similar papers, maybe while discussing research topics on its forums (groups). The discussions could eventually lead to a research proposal. As far as I understand, the platform will also let researchers write papers collaboratively. The manuscript can then be submitted to any journal. ScienceOpen also acts as the publisher of the ScienceOpen Research journal, which is published on the same platform that collects the other papers. I like that the journal does not confine itself to certain disciplines. Interesting journals like PeerJ have chosen to publish papers only from certain disciplines, and it is sometimes difficult for researchers from smaller fields to find good open access journals. This also holds true for some megajournals. ScienceOpen Research has chosen to accept papers from virtually any discipline. While this might be challenging in terms of reaching a critical mass, I think that this strategy will boost the adoption of open access in those disciplines (and fields) not fully covered. As reported in the paper I have just published in ScienceOpen Research, the path to open access publishing is still long in several disciplines.
Q. Can you describe the research you’ve just published with us at ScienceOpen a little?
The journal article is a modest exploratory study I performed in November 2013. Its story is quite unusual. As a researcher, I like to wear the scientist’s hat when observing what I am passionate about. While my PhD studies are about human aspects of software development, I dedicate “my 20% time” (as in http://googleblog.blogspot.it/2006/05/googles-20-percent-time-in-action.html) to trying to understand open access and open science in my disciplines. Together with my supervisors, I performed a systematic analysis of the open access journals in our fields. The results showed that the majority of the journals presented several issues: they were unknown, lacked transparency, did not offer archival of articles, obscured their review process and publication ethics, and asked for article processing charges that were completely unjustified by the features offered. We (I in particular) were no longer surprised to sense reluctance from our colleagues when talking about open access publishing. I also wondered about the state of green open access in computer science. My experience when mentioning self-archiving to colleagues, visiting academics, and authors at conferences has been miserable. Most authors did not know whether self-archiving was allowed, or reported hosting the publisher’s PDF on their personal websites without understanding that this was not legal. Often, the authors did not understand which rights they kept or gave away when signing copyright transfer agreements. As a PhD student, I wanted to understand more.
Therefore, with the blessing of my supervisors, I designed a small survey to be administered to my faculty, regarding awareness of self-archiving, its practice, and the inhibitors of self-archiving. I was particularly interested in the inhibitors, because once we know what prevents something from happening, we can work to limit those obstacles and make it happen. I designed the questionnaire to minimize the number of questions and the time needed to fill it in, because I know very well how little time academics have.
I presented the intermediate results of the study at SFSCon 2013, a conference related to Free Software (https://www.sfscon.it/talks/green-open-access-2). Despite the fact that the participants were mostly software developers and hackers, the talk was very well received, and the participants offered interesting suggestions. This pushed me to further analyze the data, especially the open-ended answers, with a critical eye and a systematic approach, in order to offer the results to academia.
The analysis of the results offered some interesting propositions regarding self-archiving in computer science. For example, we expect that 60% of researchers in computer science know what self-archiving is. However, 80% of them are expected to never or rarely self-archive preprints (70% for postprints). On the other hand, we expect that 45% of researchers in computer science, with varying frequency, do not respect the copyright transfer agreements they sign. We also expect that the major factors inhibiting self-archiving in computer science are the lack of automation mechanisms and tools, the time required for self-archiving, and unawareness of the self-archiving practice itself.
While the study is modest and exploratory by nature, this first evidence clearly hints that in computer science there is still quite some work to be done to foster self-archiving among authors. The paper provides several recommendations; however, more studies with larger sample sizes are needed. My hope with this study is to offer initial evidence towards hypotheses, and a measurement instrument that can be extended but should be kept minimal in order to ensure high response rates. Lastly, the study carries the “hidden” message that open access advocates should not make the mistake of thinking that open access has already won its war. To me, victory is possible, but there is still a long way to go.