The GRIM test: A simple technique detects numerous anomalies in the reporting of results in psychology
- Subject Areas
- Psychiatry and Psychology, Statistics
- Keywords
- Methodology, Replicability, Likert scales, Reanalysis
- Copyright
- © 2016 Brown et al.
- Licence
- This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Preprints) and either DOI or URL of the article must be cited.
- Cite this article
- Brown NJL, Heathers JAJ. 2016. The GRIM test: A simple technique detects numerous anomalies in the reporting of results in psychology. PeerJ Preprints 4:e2064v1 https://doi.org/10.7287/peerj.preprints.2064v1
Abstract
We present a simple mathematical technique that we call GRIM (Granularity-Related Inconsistency of Means) for verifying the summary statistics of published research reports in psychology. This technique evaluates whether the reported means of integer data such as Likert-type scales are consistent with the given sample size and number of items. We tested this technique with a sample of 260 recent articles in leading journals within empirical psychology. Of the subset of articles that were amenable to testing with the GRIM technique (N = 71), around half (N = 36; 50.7%) appeared to contain at least one reported mean inconsistent with the reported sample sizes and scale characteristics, and more than 20% (N = 16) contained multiple such inconsistencies. We requested the data sets corresponding to 21 of these articles, receiving positive responses in 9 cases. We were able to confirm the presence of at least one reporting error in all cases, with 2 articles requiring extensive corrections. The implications for the reliability and replicability of empirical psychology are discussed.
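As an illustration of the granularity check that GRIM performs, the following minimal Python sketch tests whether a reported mean is reachable given the sample size and number of integer items. The function name `grim_consistent` and its interface are our own illustration, not code from the article.

```python
import math

def grim_consistent(reported_mean: str, n: int, items: int = 1) -> bool:
    """Return True if a reported mean could arise from integer data.

    reported_mean is passed as a string (e.g. "3.48") so that the
    number of reported decimal places is preserved; n is the sample
    size and items is the number of integer items averaged per person.
    """
    decimals = len(reported_mean.split(".")[1]) if "." in reported_mean else 0
    target = float(reported_mean)
    granularity = n * items  # total number of integer responses behind the mean

    # Achievable means are multiples of 1/granularity; only the two
    # multiples bracketing the reported value can round to it.
    for total in (math.floor(target * granularity), math.ceil(target * granularity)):
        if f"{total / granularity:.{decimals}f}" == f"{target:.{decimals}f}":
            return True
    return False

# With N = 25 single-item responses, a mean of 3.48 (= 87/25) is
# achievable, but 3.49 is not.
print(grim_consistent("3.48", 25))  # True
print(grim_consistent("3.49", 25))  # False
```

Checking only the two multiples of 1/(N × k) that bracket the reported value suffices, since no other achievable mean can round to it. Note that Python's formatting rounds half to even; a thorough implementation would also allow for a round-half-up convention, since published reports may use either.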
Author Comment
We are posting this article here in order to gather feedback from the research community prior to submitting it to a peer-reviewed journal.