Sensitivity and specificity of information criteria

The Methodology Center, The Pennsylvania State University, State College, PA, USA
Department of Statistics, The Pennsylvania State University, State College, PA, USA
DOI: 10.7287/peerj.preprints.1103v1
Subject Areas: Statistics
Keywords: AIC, BIC, Sensitivity, Specificity, CAIC, AICc, Model selection, Variable selection, Likelihood ratio test, Latent class analysis
Copyright: © 2015 Dziak et al.
License: This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ PrePrints) and either DOI or URL of the article must be cited.
Cite this article: Dziak JJ, Coffman DL, Lanza ST, Li R. 2015. Sensitivity and specificity of information criteria. PeerJ PrePrints 3:e1103v1

Abstract

Choosing a model with too few parameters can involve making unrealistically simple assumptions and lead to high bias, poor prediction, and missed opportunities for insight. Such models are not flexible enough to describe the sample or the population well. A model with too many parameters can fit the observed data very well but be too closely tailored to it. Such models may generalize poorly. Penalized-likelihood information criteria, such as Akaike's Information Criterion (AIC), the Bayesian Information Criterion (BIC), the Consistent AIC (CAIC), and the Adjusted BIC, are widely used for model selection. However, different criteria sometimes support different models, leading to uncertainty about which criterion is the most trustworthy. In some simple cases, the comparison of two models using information criteria can be viewed as equivalent to a likelihood ratio test, with the different criteria representing different alpha levels (i.e., different emphases on sensitivity or specificity; Lin & Dayton, 1997). This perspective may lead to insights about how to interpret the criteria in less simple situations. For example, either AIC or BIC could be preferable, depending on sample size and on the relative importance one assigns to sensitivity versus specificity. Understanding the differences among the criteria may make it easier to compare their results and to use them to make informed decisions.
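
To make the likelihood-ratio analogy concrete, the following is a minimal illustrative sketch (not part of the report itself). It assumes the standard penalty forms for the criteria named above: AIC = -2 log L + 2p, BIC = -2 log L + p log n, CAIC = -2 log L + p(log n + 1), and Adjusted BIC = -2 log L + p log((n + 2)/24), where p is the number of free parameters and n the sample size. It also assumes the usual chi-square reference distribution for the likelihood ratio statistic, which can fail in the "less simple situations" the abstract mentions (e.g., boundary parameters in latent class analysis).

import math
from scipy.stats import chi2

def penalty(extra_params, n, criterion):
    # Penalty added to -2*log L for `extra_params` additional parameters;
    # these are the standard textbook forms (an assumption here, not a
    # quotation from the report).
    if criterion == "AIC":
        return 2 * extra_params
    if criterion == "BIC":
        return extra_params * math.log(n)
    if criterion == "CAIC":
        return extra_params * (math.log(n) + 1)
    if criterion == "ABIC":  # sample-size-adjusted ("Adjusted") BIC
        return extra_params * math.log((n + 2) / 24)
    raise ValueError(f"unknown criterion: {criterion}")

def implied_alpha(extra_params, n, criterion):
    # The criterion prefers the larger of two nested models exactly when
    # 2*(log L1 - log L0) exceeds the penalty difference, so under a
    # chi-square null distribution the comparison behaves like a
    # likelihood ratio test performed at this alpha level.
    return chi2.sf(penalty(extra_params, n, criterion), df=extra_params)

# One extra parameter, n = 200: AIC behaves like an LRT at alpha ~ 0.157,
# while BIC behaves like alpha ~ 0.021.
for crit in ("AIC", "BIC", "CAIC", "ABIC"):
    print(crit, round(implied_alpha(1, 200, crit), 3))

Because BIC's per-parameter penalty grows with n, its implied alpha shrinks toward zero as the sample grows, which is one way to see why BIC emphasizes specificity while AIC, with its fixed penalty, emphasizes sensitivity.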

Author Comment

This report has also been released as Methodology Center Technical Report 12-119 (The Methodology Center, The Pennsylvania State University).