@lewis_halsey @RSocPublishing Thanks, important paper! Of course, all those alternatives could be misused, in the same way as p-values, for making "yes" or "no" decisions from single studies. Also, model selection based on delta-AIC thresholds will lead to inflated effect sizes. https://t.co/PvNvK2WOu2
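A minimal sketch of the inflation mechanism the tweet points to, under illustrative assumptions (a weak true slope, small samples, and the conventional delta-AIC > 2 rule); the sample size, effect size, and study count are mine, not from the thread.

```python
import numpy as np

# Simulate many small studies with a weak true effect, compare an "effect" model
# against an intercept-only "null" model by AIC, and keep the slope estimate only
# when delta AIC exceeds 2. The kept estimates overstate the true effect.
rng = np.random.default_rng(1)
true_beta, n, n_studies, threshold = 0.15, 30, 20_000, 2.0
kept = []

for _ in range(n_studies):
    x = rng.normal(size=n)
    y = true_beta * x + rng.normal(size=n)

    # OLS fits of y = a + b*x and of the intercept-only model
    X = np.column_stack([np.ones(n), x])
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    rss_full = np.sum((y - X @ beta_hat) ** 2)
    rss_null = np.sum((y - y.mean()) ** 2)

    # Gaussian-likelihood AIC: n*log(RSS/n) + 2k (additive constants cancel in the difference)
    aic_full = n * np.log(rss_full / n) + 2 * 3   # intercept, slope, sigma
    aic_null = n * np.log(rss_null / n) + 2 * 2   # intercept, sigma

    if aic_null - aic_full > threshold:           # "delta AIC > 2" selection rule
        kept.append(beta_hat[1])

print(f"true effect: {true_beta}")
print(f"mean |effect| among selected models: {np.mean(np.abs(kept)):.3f}")
```

Conditional on clearing the threshold, the average reported |slope| comes out well above 0.15, which is the inflation being described.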
"Surely, God loves the .06 nearly as much as the .05” (Rosnow & Rosenthal, 1989). La littérature sur les tests statistiques d'hypothèses est parfois merveilleuse, contrairement aux idées reçues. Lisez : https://t.co/HUlOaC1uEG
First, consider reading https://t.co/VqDMi6T2UE and then consider subscribing to the comment. Researchers are capable of more profound discussions about science than p-value-based binary decisions! https://t.co/LOemTrhzj6
Selective reporting has been encouraged since Fisher (1937): "it is usual and convenient for experimenters to take 5 per cent as a standard level of significance, in the sense that they are prepared to ignore all results which fail to reach this standard." https://t.co/bWu0iag4pw
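A minimal sketch of what such a reporting filter does to the published record; the true mean difference, sample size, and study count are illustrative assumptions of mine, not Fisher's.

```python
import numpy as np
from scipy import stats

# If only results with p < .05 are reported, the reported mean differences
# overstate the true difference (the filter keeps the lucky overestimates).
rng = np.random.default_rng(2)
true_diff, n, n_studies = 0.2, 25, 20_000
reported = []

for _ in range(n_studies):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(true_diff, 1.0, n)
    t, p = stats.ttest_ind(b, a)
    if p < 0.05:                       # the "standard level of significance"
        reported.append(b.mean() - a.mean())

print(f"true difference: {true_diff}")
print(f"mean reported difference: {np.mean(reported):.3f}")
```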
@PieterHog @robustgar @lakens @NeuroStats @learnfromerror The Neyman–Pearson decision procedure was particularly suitable for industrial quality control, or "sampling tests laid down in commercial specifications" (Neyman & Pearson 1933). https://t.co/FVD5GmX5Eo