Comments on "Researcher bias: The use of machine learning in software defect prediction"

Graduate School of Information Science, Nara Institute of Science and Technology, Nara, Japan
Department of Electrical and Computer Engineering, McGill University, Montreal, Quebec, Canada
School of Computing, Queen's University, Kingston, Ontario, Canada
DOI
10.7287/peerj.preprints.1260v2
Subject Areas
Data Mining and Machine Learning, Data Science, Software Engineering
Keywords
Software Engineering, Software Quality Assurance, Software Defect Prediction, Machine Learning, Researcher Bias, Collinearity, Multi-collinearity
Copyright
© 2016 Tantithamthavorn et al.
Licence
This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Preprints) and either DOI or URL of the article must be cited.
Cite this article
Tantithamthavorn C, McIntosh S, Hassan AE, Matsumoto K. 2016. Comments on "Researcher bias: The use of machine learning in software defect prediction." PeerJ Preprints 4:e1260v2 https://doi.org/10.7287/peerj.preprints.1260v2

Abstract

Shepperd et al. find that the reported performance of a defect prediction model shares a strong relationship with the group of researchers who construct the models. In this paper, we perform an alternative investigation of Shepperd et al.'s data. We observe that (a) the research group shares a strong association with other explanatory variables (i.e., the dataset and metric families that are used to build a model); (b) the strong association among these explanatory variables makes it difficult to discern the impact of the research group on model performance; and (c) after mitigating the impact of this strong association, we find that the research group has a smaller impact than the metric family. These observations lead us to conclude that the relationship between the research group and the performance of a defect prediction model is more likely due to the tendency of researchers to reuse experimental components (e.g., datasets and metrics). We recommend that researchers experiment with a broader selection of datasets and metrics to combat any potential bias in their results.
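To make observation (a) concrete, the sketch below shows one common way to quantify the association between two categorical explanatory variables, such as research group and dataset family, using Cramér's V (0 = no association, 1 = perfect association). The column names and example rows are hypothetical illustrations, not Shepperd et al.'s actual data, and this is an assumed analysis sketch rather than the paper's exact procedure.

```python
# A minimal sketch: measuring association between categorical
# explanatory variables with Cramer's V. Column names and rows
# below are hypothetical placeholders.
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(x: pd.Series, y: pd.Series) -> float:
    """Cramer's V between two categorical variables."""
    table = pd.crosstab(x, y)               # contingency table
    chi2 = chi2_contingency(table)[0]       # chi-squared statistic
    n = table.to_numpy().sum()              # total observations
    r, k = table.shape
    return (chi2 / (n * (min(r, k) - 1))) ** 0.5

# Hypothetical data: one row per published model result.
df = pd.DataFrame({
    "ResearcherGroup": ["A", "A", "B", "B", "C", "C"],
    "DatasetFamily":   ["NASA", "NASA", "OSS", "OSS", "NASA", "OSS"],
    "MetricFamily":    ["static", "static", "process",
                        "process", "static", "process"],
})

for a, b in [("ResearcherGroup", "DatasetFamily"),
             ("ResearcherGroup", "MetricFamily"),
             ("DatasetFamily", "MetricFamily")]:
    print(f"Cramer's V({a}, {b}) = {cramers_v(df[a], df[b]):.2f}")
```

A high Cramér's V between research group and dataset or metric family would indicate exactly the kind of collinearity described above: the variables move together, so their individual effects on model performance cannot be cleanly separated.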

Author Comment

This article has been accepted for publication in IEEE Transactions on Software Engineering but has not yet been fully edited.