A few months ago, we published “On the reproducibility of science: unique identification of research resources in the biomedical literature”, an important article which has received considerable attention from the community. It has already been cited several times, viewed over 4,500 times, and downloaded as a PDF 1,600 times, indicating that the study is making a significant impact in the field.
We felt it would be informative to ask the authors to comment on the impact of this work. Nicole Vasilevsky and Melissa Haendel (first and last author, respectively) do their research in the Ontology Development Group at Oregon Health & Science University.
PJ: What was the reception of your peers to this publication, and would you say that this publication has influenced others?
NV & MH: The paper was very well received and the reception exceeded our expectations. For example, we were cited by The Economist, in a Nature commentary on reproducibility by Francis Collins, and recently in a Scientific American blog post. Scientific reproducibility is a hot topic, and this experiment provided quantitative evidence of a fundamental problem: the lack of unique identifiers for research resources. It is an easy problem to address, but one that requires a cultural shift in the way we report our science.
We believe that our work has in fact influenced others. We have attended a number of conferences lately with a significant focus on capturing information pre-publication, to ensure adequate reproducibility, enable better scholarly communication and linking amongst different online content, and reduce the load on curators who must follow up with authors for this information. It is very nice to see such a diverse community of authors, researchers, editors, publishers, data stewards, and computer scientists all working together to ensure that structured metadata is recorded as part of the publication process. We believe that a cultural shift in the way we report on science in the published literature is actually happening, and it is rewarding to be a part of efforts such as FORCE11 to enable this change.
PJ: How has your research progressed since the publication?
NV & MH: As an outcome of our study, we launched the Resource Identification Initiative with FORCE11, collaborators from UC San Diego, and sponsorship from the International Neuroinformatics Coordinating Facility and the NIH, with participation from academic, government, and non-governmental institutions, publishers, and commercial antibody companies. This initiative aims to enable resource identification within the biomedical literature through a pilot study promoting the use of unique Research Resource Identifiers (RRIDs). In the pilot study, authors are asked to include RRIDs in their manuscripts prior to publication for three resource types: antibodies, model organisms, and tools (including software and databases). RRIDs meet three key criteria: they are machine readable, free to generate and access, and consistent across publishers and journals. The intention is to provide a central, permanent resource that journal submission systems can query for authoritative information, and that conversely links out to the various nomenclature and data authorities.
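To make the "machine readable" point concrete, here is a minimal sketch (not from the paper or the pilot's official tooling) of how RRIDs cited in a methods section can be extracted with a single regular expression. The regex and the specific identifiers and registry prefixes in the example are illustrative assumptions, not a specification of the RRID format.

```python
import re

# Illustrative sketch: because RRIDs follow a predictable "RRID:<prefix>_<number>"
# shape, a simple regex can pull them out of free text. The prefixes implied below
# (AB_ for antibodies, SCR_ for software/tools) are examples, not an exhaustive list.
RRID_PATTERN = re.compile(r"RRID:\s*([A-Za-z_]+[_:]\d+)")

# Hypothetical methods-section text; the identifiers are for illustration only.
methods_text = (
    "Cells were stained with anti-GFAP (Abcam ab7260, RRID:AB_305808) "
    "and images were processed in ImageJ (RRID:SCR_003070)."
)

for match in RRID_PATTERN.finditer(methods_text):
    print(match.group(1))
# Prints:
#   AB_305808
#   SCR_003070
```

This is the practical payoff of the initiative: curators, registries, and text-mining pipelines can locate and resolve cited resources automatically, rather than guessing from catalog numbers or vendor names.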
Additionally, we are working with Science Exchange on the Reproducibility Project: Cancer Biology, which aims to reproduce the key experiments from 50 of the most influential recent cancer biology papers in independent labs. As a follow-up analysis to our original study, we analyzed the same five resource types in those 50 papers. The results were surprisingly similar. Overall, 87% of the resources were identifiable; however, this figure was inflated by one paper that used over 3,000 siRNA knockdown reagents, all of which had references to unique identifiers. Excluding that dataset, only 40% of resources were identifiable on average. The averages for each resource type were also quite similar: for example, approximately 40% of antibodies were identifiable in both studies. The final analysis and original dataset are shared on figshare and linked to the PeerJ paper.
PJ: Why do you think this article has been so highly cited and downloaded?
NV & MH: The topic of research reproducibility is an area of great interest right now amongst the scientific community. There are two sides to this coin. One is that a great deal of time and money is spent performing experiments, so supporting reproducibility makes science more efficient by enabling others' work to be reused. The other is that there are growing demands for accountability and an increasing number of retractions. The degree to which the scientific literature should be reproducible is the subject of much debate (for example, see this post and this article).
Join Nicole Vasilevsky, Melissa Haendel and thousands of other satisfied authors, and submit your next article to PeerJ.