One of the smallest incremental steps to address statistical issues of replicability, and at the same time a most urgent step, is to remove thresholds of statistical significance like p = 0.05. https://t.co/7vnYDeLVss
@Quasilocal @OliverFaude @Neuro_Skeptic @DFanDaBiasedMan "The average truth often does not make it to the paper and the public, and much of our attention is attracted by exaggerated results." https://t.co/tiUcLt2aBI https://t.co/Vpmi0Btvd3
@SGruninger 3/ However, as we explain in the attached paper, applying significance thresholds makes cumulative knowledge unreliable. I think that applying any sort of inferential threshold to single studies introduces bias. https://t.co/Q167s9yTBH
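A minimal sketch of the bias the tweet refers to (my own illustration, not from the linked paper): if only results crossing p < 0.05 get reported, the average published effect overstates the true effect. All parameter values here are hypothetical.

```python
# Hypothetical simulation: filtering studies at p < 0.05 inflates
# the average reported effect size (selection bias / "winner's curse").
import math
import random

random.seed(0)

TRUE_EFFECT = 0.2        # small true standardized effect (assumed)
N = 30                   # per-study sample size, giving low power (assumed)
SE = 1 / math.sqrt(N)    # standard error of the effect estimate
N_STUDIES = 10_000

# Each study yields an effect estimate ~ Normal(TRUE_EFFECT, SE);
# it is "significant" when |estimate / SE| > 1.96 (two-sided p < 0.05).
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_STUDIES)]
significant = [e for e in estimates if abs(e / SE) > 1.96]

mean_all = sum(estimates) / len(estimates)
mean_sig = sum(significant) / len(significant)

print(f"true effect:                    {TRUE_EFFECT}")
print(f"mean of all estimates:          {mean_all:.3f}")
print(f"mean of 'significant' estimates: {mean_sig:.3f}")
```

With these assumed numbers, the mean of all estimates sits near the true 0.2, while the mean of the selected "significant" estimates lands roughly twice as high; a literature built only from the latter cumulates a distorted picture.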
It has been argued that the proper use of p-values is to guide behavior or to suggest a direction for further research, not to classify results as significant or non-significant.
“A statistically significant result may not be easy to reproduce.”
https://t.co/hzTee0hMqQ
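A quick numerical illustration of the quoted point (mine, not from the linked article; effect size and sample size are assumed): even when an effect is real, an exact replication of a significant finding often misses p < 0.05 again, simply because power is moderate.

```python
# Hypothetical simulation: the chance that an exact, independent
# replication of a study on a real effect reaches p < 0.05 again
# equals the study's power, which is often well below 1.
import math
import random

random.seed(1)

TRUE_EFFECT = 0.3        # real standardized effect (assumed)
N = 50                   # sample size of original and replication (assumed)
SE = 1 / math.sqrt(N)
N_SIM = 20_000

def is_significant() -> bool:
    """One simulated study: estimate ~ Normal(TRUE_EFFECT, SE),
    significant if two-sided p < 0.05 (|z| > 1.96)."""
    estimate = random.gauss(TRUE_EFFECT, SE)
    return abs(estimate / SE) > 1.96

# A replication is an independent draw from the same design, so the
# probability it is significant is the power of the design itself.
power = sum(is_significant() for _ in range(N_SIM)) / N_SIM
print(f"estimated power (= replication rate): {power:.2f}")
```

Under these assumptions the power comes out near 0.55, so roughly half of faithful replications of a true, previously "significant" effect would be labeled failures by the same threshold.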
My 'The earth is flat (p > 0.05): significance thresholds and the crisis of unreplicable research' article was published 1 year ago today in #OpenAccess journal @thePeerJ https://t.co/Q167s9yTBH
Replication crisis, or significance values that barely signify anything. "p-values should be interpreted as graded measures of the strength of evidence against the null hypothesis" (Ronald Fisher). #Psicología #Ciencia #Conocimiento https://t.co/IcTX5spz2C via @thePeerJ
@HarryDCrane Arbitrary thresholds are arguably part of the reproducibility problem. For instance, this article: [I am not an author of this article] https://t.co/Sl4FvxlMBF