A significance test often does not make a clear statement about an effect, but instead it "examines if the sample size is large enough to detect the effect" (Werner Stahel 2016). https://t.co/HPbcFlNKmq
1/3 We should interpret larger p-values as perhaps less convincing but still generally positive evidence against the model (including the null hypothesis), rather than as evidence that is necessarily negative, uninterpretable, or merely a sign that we did not collect enough data. https://t.co/8AOv1WfH9K
@shravanvasishth Yes, but that's not a published *original* claim, no? I collected some references to similar surveys on false beliefs here: https://t.co/zHOE8rq0HD
@VPrasadMDMPH @f_g_zampieri Whoever wants to deemphasize "trends" should deemphasize "significance" in general and present p-values as continuous indices of compatibility between the data and the model. See https://t.co/PnRCHpMwzE and https://t.co/20zlyHGYQe.
@ian_soboroff @SolomonMg Let's not give up the fight! In addition, I will try to drop "confidence" and call it a compatibility interval. See https://t.co/y6ZeHs4xSm and https://t.co/VNayXZikC9
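Not part of the tweet, but to make "compatibility interval" concrete: a minimal Python sketch under an assumed normal model, showing that the 95% compatibility interval is simply the set of parameter values whose two-sided p-value exceeds 0.05 — numerically identical to the familiar 95% "confidence" interval, only the interpretation changes. The estimate and standard error below are arbitrary illustrative numbers.

```python
import math

def two_sided_p(est, se, value):
    """p-value testing 'parameter == value' under a normal model."""
    z = abs(est - value) / se
    return math.erfc(z / math.sqrt(2))

def compatibility_interval(est, se, z=1.96):
    """All parameter values with two-sided p > ~0.05:
    numerically the familiar 95% 'confidence' interval."""
    return est - z * se, est + z * se

# hypothetical estimate 10.0 with standard error 2.0
lo, hi = compatibility_interval(10.0, 2.0)
# values just inside the interval (e.g. 6.1) have p a bit above 0.05;
# values just outside (e.g. 6.0) have p below it
```

The renaming emphasizes that every value inside the interval is reasonably compatible with the data under the model, not that we are "confident" the true value lies inside.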
@MaartenvSmeden I don't like "trends", but I like that they are increasing. This shows that dichotomania is gradually decreasing, and that people are prepared to interpret p-values as continuous indices of compatibility between the data and the model. https://t.co/Fx2CEbkpps
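To illustrate the "continuous index of compatibility" reading (this sketch is not from the thread): a small Python example converting a z statistic into a two-sided p-value, and the p-value into an S-value (-log2 p, Shannon surprisal in bits), which makes the graded, non-dichotomous interpretation explicit. The z values are arbitrary examples.

```python
import math

def z_test_p(z):
    """Two-sided p-value for a z statistic under a standard normal model."""
    return math.erfc(abs(z) / math.sqrt(2))

def s_value(p):
    """Shannon surprisal: bits of information against the model."""
    return -math.log2(p)

# compatibility shrinks smoothly as z grows; no bright line at 0.05
for z in (0.5, 1.0, 1.96, 3.0):
    p = z_test_p(z)
    print(f"z = {z:.2f}  p = {p:.3f}  S = {s_value(p):.1f} bits")
# e.g. z = 1.96 gives p ~ 0.05, about 4.3 bits against the model --
# roughly as surprising as 4-5 heads in a row from a fair coin
```

Read this way, p = 0.04 and p = 0.06 are nearly the same amount of evidence, which is exactly the point against dichotomizing at a threshold.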
... in selecting the variables to be measured, in determining the data sampling scheme, in choosing the statistical model and the test statistic, in verifying whether model assumptions are met, in handling outliers, in transforming the data, and in choosing which software to use. https://t.co/ARnvLhn25f