Best practices for conducting benchmarking in the most comprehensive and reproducible way
1 Department of Computer Science, University of California, Los Angeles, Los Angeles, CA, United States
2 Semel Institute for Neuroscience and Human Behavior, University of California, Los Angeles, Los Angeles, CA, United States
3 Department of Computer Science and Department of Human Genetics, University of California, Los Angeles, Los Angeles, CA, United States
- Subject Areas
- Computational Biology, Genetics, Genomics, Computational Science
- Keywords
- reproducibility, computational genomics, benchmarking
- Copyright
- © 2017 Mangul et al.
- Licence
- This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Preprints) and either DOI or URL of the article must be cited.
- Cite this article
- Mangul et al. 2017. Best practices for conducting benchmarking in the most comprehensive and reproducible way. PeerJ Preprints 5:e3236v1 https://doi.org/10.7287/peerj.preprints.3236v1
Abstract
Computational biology is advancing rapidly thanks to the many new tools developed and published each month. Systematic benchmarking would help biomedical researchers leverage this technological expansion to optimize their projects, yet several aspects of how algorithms are published and distributed make such benchmarking difficult. We address these challenges and present seven principles to guide researchers in designing a benchmarking study. The proposed steps show how benchmarking can create a framework for comparing newly published algorithms.
Author Comment
This paper is currently under review at a peer-reviewed journal.