Key steps to avoiding artistry with significance tests
- Subject Areas
- Ecology, Evidence Based Medicine, Oncology, Statistics, Environmental Impacts
- Keywords
- null hypothesis, significance testing, model fitting, frequentist statistics, p-values
- Copyright
- © 2017 Doncaster et al.
- Licence
- This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Preprints) and either DOI or URL of the article must be cited.
- Cite this article
- Doncaster et al. 2017. Key steps to avoiding artistry with significance tests. PeerJ Preprints 5:e3394v1 https://doi.org/10.7287/peerj.preprints.3394v1
Abstract
Statistical significance provides evidence for or against an explanation of a population of interest, not a description of data sampled from the population. This simple distinction gets ignored in hundreds of thousands of research publications yearly, which confuse statistical with biological significance by referring to hypothesis-testing analyses as demonstrating significant results. Here we identify three key steps to objective reporting of evidence-based analyses. Firstly, by interpreting P-values correctly as explanation not description, authors set their inferences in the context of the design of the study and its purpose to test for effects of biologically relevant size; nowhere in this process is it informative to use the word ‘significant’. Secondly, empirical effect sizes demand interpretation with respect to a size of relevance to the test hypothesis. Thirdly, even without an a priori expectation of biological relevance, authors can and should interpret significance tests with respect to effects of reliably detectable size.
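The sketch below is a minimal illustration of the abstract's three steps, not code from the preprint. It assumes a two-sample comparison analysed with scipy and statsmodels; the simulated samples, the biologically relevant effect size of d = 0.5, and the 80% power criterion are all hypothetical placeholders chosen for the example.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(42)

# Hypothetical samples from two populations (e.g. control vs. treatment)
control = rng.normal(loc=10.0, scale=2.0, size=30)
treatment = rng.normal(loc=11.0, scale=2.0, size=30)

# Step 1: the P-value weighs evidence against the null explanation of the
# populations sampled; it does not describe the samples themselves.
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, P = {p_value:.3f}")

# Step 2: interpret the empirical effect size against a size of relevance
# to the test hypothesis, chosen a priori (hypothetical threshold here).
pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
cohens_d = (treatment.mean() - control.mean()) / pooled_sd
relevant_d = 0.5  # assumed biologically relevant standardised effect
print(f"Observed d = {cohens_d:.2f} vs. relevant d = {relevant_d}")

# Step 3: without an a priori threshold, report the smallest effect this
# design could detect reliably (here: 80% power at alpha = 0.05).
detectable_d = TTestIndPower().solve_power(
    nobs1=len(control), alpha=0.05, power=0.8, ratio=1.0
)
print(f"Reliably detectable d for this design: {detectable_d:.2f}")
```

Reporting the observed effect alongside the a priori relevant size (step 2) or the reliably detectable size (step 3) keeps the inference anchored to the study design rather than to the word ‘significant’.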
Author Comment
This is a preprint submission to PeerJ Preprints.