Technical note: how to rationally compare the performances of different machine learning models?

Engineering, Nagoya University, Nagoya, Japan
DOI
10.7287/peerj.preprints.26714v1
Subject Areas
Artificial Intelligence, Data Mining and Machine Learning, Data Science
Keywords
Machine learning, comparison, testing, training
Copyright
© 2018 Maeda
Licence
This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Preprints) and either DOI or URL of the article must be cited.
Cite this article
Maeda T. 2018. Technical note: how to rationally compare the performances of different machine learning models? PeerJ Preprints 6:e26714v1

Abstract

Nowadays, a large number of machine learning models are available for a wide range of application areas. However, different research targets are usually sensitive to the choice of model. For a specific prediction target, the predictive accuracy of a machine learning model always depends on the data features, the data size, and the intrinsic relationship between the inputs and outputs. Therefore, for a given dataset and a fixed prediction task, how to rationally compare the predictive accuracy of different machine learning models is an important question. In this brief note, we show how the performances of different machine learning models should be compared, illustrated with some typical examples.
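As a minimal illustration of the kind of comparison the note discusses (this sketch is not drawn from the paper itself), one common way to compare two models rationally on a fixed dataset is to evaluate both on the same k-fold cross-validation splits, so that any difference in score reflects the models rather than the data partition. The synthetic data, the two toy models (a mean-predictor baseline and a one-feature least-squares fit), and all function names below are illustrative assumptions:

```python
import random

random.seed(0)

# Hypothetical synthetic data: y depends linearly on x, plus Gaussian noise.
data = [(float(x), 2.0 * x + random.gauss(0.0, 1.0)) for x in range(100)]
random.shuffle(data)

def kfold(data, k=5):
    """Yield (train, test) splits for k-fold cross-validation."""
    fold = len(data) // k
    for i in range(k):
        test = data[i * fold:(i + 1) * fold]
        train = data[:i * fold] + data[(i + 1) * fold:]
        yield train, test

def mse(model, test):
    """Mean squared error of a fitted model on held-out data."""
    return sum((model(x) - y) ** 2 for x, y in test) / len(test)

def fit_mean(train):
    """Baseline: always predict the mean of the training targets."""
    m = sum(y for _, y in train) / len(train)
    return lambda x: m

def fit_linear(train):
    """Ordinary least squares for a single input feature."""
    n = len(train)
    mx = sum(x for x, _ in train) / n
    my = sum(y for _, y in train) / n
    sxy = sum((x - mx) * (y - my) for x, y in train)
    sxx = sum((x - mx) ** 2 for x, _ in train)
    b = sxy / sxx
    a = my - b * mx
    return lambda x: a + b * x

# Evaluate both models on the SAME splits, then compare mean scores.
results = {}
for name, fit in [("mean baseline", fit_mean), ("linear model", fit_linear)]:
    scores = [mse(fit(train), test) for train, test in kfold(data)]
    results[name] = sum(scores) / len(scores)
    print(f"{name}: mean MSE = {results[name]:.3f}")
```

Because both models see identical train/test partitions, the gap between their mean errors is attributable to the models themselves; repeating the procedure over several random shuffles would additionally quantify the variability of that gap.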

Author Comment

under review