This study involves healthy volunteers, not a rare patient population, so a sample size of 13 (dropping to 12 for skin conductance) seems oddly small. The experiment presumably lacks statistical power, and the paper neither justifies nor discusses this glaring limitation.
Why was this not challenged by reviewers?
PeerJ needs to consider these issues very carefully if it is trying to encourage scientists to submit their best work for publication.
It is important to note that the key limitation of these smaller-sample studies is that they can only detect big differences or big effects. I do not think the sample size itself is the problem: as long as we use appropriate statistical methods, we can draw acceptable inferences. Of course, if the authors could replicate the findings in an independent sample, the study would be more convincing. However, the replication issue also exists in large-sample studies, and large samples bring a problem of their own: statistically significant but tiny effect sizes. In sum, I think the sample size is not a problem here, though replication in an independent sample would strengthen the work.
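The "only big effects are visible" point can be made concrete with a power calculation (a sketch using scipy's noncentral t distribution; the two-sided one-sample t-test at α = 0.05 and the benchmark effect sizes are illustrative assumptions, not taken from the paper):

```python
import numpy as np
from scipy import stats

def one_sample_power(d, n, alpha=0.05):
    """Power of a two-sided one-sample t-test at Cohen's d, sample size n."""
    df = n - 1
    nc = d * np.sqrt(n)                     # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # probability that the noncentral t statistic lands in either rejection tail
    return stats.nct.sf(t_crit, df, nc) + stats.nct.cdf(-t_crit, df, nc)

# Cohen's conventional small/medium/large effect sizes at n = 13
for d in (0.2, 0.5, 0.8):
    print(f"d = {d}: power at n = 13 is {one_sample_power(d, 13):.2f}")
```

At n = 13 only large effects approach conventional 80% power; small and medium effects are mostly invisible, which is exactly the "big effects only" regime described above.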
- Xiang-Zhen Kong •

Are the statistical methods employed above really appropriate? It’s almost impossible to tell without careful consideration of power alongside parametric assumptions. Citing previous research to justify a small sample size doesn’t mitigate what remains a problem for this paper and many others across psychology.
First, a lack of power not only makes it harder to detect effects that may exist, but also results in lower precision in estimates and systematically inflated effect sizes (if effect sizes are reported at all). Second, while these problems do not disappear entirely with larger samples, the resulting data are far more likely to at least meet the basic assumptions required for parametric testing (e.g. normality).
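The inflation problem can be shown with a small simulation (a sketch in Python with numpy/scipy; n = 13 matches the study, but the true effect size of 0.3 and the one-sample design are illustrative assumptions): among the runs that happen to reach significance, the observed effect size is systematically larger than the true one.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d, n, sims = 0.3, 13, 20_000   # assumed true effect, study-sized samples

sig_d = []
for _ in range(sims):
    x = rng.normal(true_d, 1.0, n)              # one small-n "study"
    t, p = stats.ttest_1samp(x, 0.0)
    if p < 0.05:                                # only "significant" studies get reported
        sig_d.append(x.mean() / x.std(ddof=1))  # observed Cohen's d

print(f"true d = {true_d}, mean |d| among significant results = "
      f"{np.mean(np.abs(sig_d)):.2f}")
```

Because significance at n = 13 requires the observed |d| to clear roughly 0.6, every published-looking result overstates the assumed true effect of 0.3.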
- David Ellis •