@keholsinger @DanielBolnick @StatModeling There are just so many Gelman papers one could cite! We cited some more, for example, here: https://t.co/y6ZeHs4xSm
@paulrconnor I like the idea of considering p-values as "graded levels of strength of evidence against the null" (https://t.co/1plLVjXzK1). I've previously used the term "weak evidence" to describe 0.05 < p < 0.1, and tried to discuss accordingly.
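For illustration, a minimal Python sketch of such graded wording (the "weak evidence" band for 0.05 < p < 0.1 is the one described above; the other thresholds and labels are my own illustrative choices, not from the cited paper):

```python
def evidence_label(p):
    """Describe a p-value as a graded strength of evidence against the null."""
    if p < 0.001:
        return "very strong evidence against the null"
    elif p < 0.01:
        return "strong evidence against the null"
    elif p < 0.05:
        return "moderate evidence against the null"
    elif p < 0.1:
        return "weak evidence against the null"  # the band described as "weak" above
    return "little or no evidence against the null"

for p in (0.0004, 0.03, 0.07, 0.4):
    print(f"p = {p}: {evidence_label(p)}")
```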
@BenSlaterNeuro Something along those lines is here :-) https://t.co/y6ZeHs4xSm (I actually wrote it for my students). Always worth a look is https://t.co/sf78ButBku.
Neither is from psychology, of course.
@DevoEvoMed @_julien_roux Not sure I understand what you mean – our obsession with null hypothesis significance testing has been criticized for a century by many, many statisticians. See, e.g., https://t.co/3fZGjQJSUj
https://t.co/y6ZeHs4xSm
https://t.co/NAlQGWg6I1
@MaartenvSmeden @dnunan79 “... it is less likely that all studies will suffer from the same type of bias; consequently, their composite picture may be more informative than the result of a single large trial”
https://t.co/MceA0G9J1g
@PabloRichly @norabar Pablo, if I may reply in English: statistical significance cannot rule out that a result is a chance event, and a P-value alone says nothing about effect size and clinical importance. See, e.g., this figure: https://t.co/f7WmiHXnM8
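A hedged simulation sketch of the second point (made-up numbers, not from any real trial): with a huge sample, a clinically trivial difference still produces a tiny p-value.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000                           # very large sample per group
control = rng.normal(0.00, 1.0, n)    # mean 0
treated = rng.normal(0.02, 1.0, n)    # mean shifted by a trivial 0.02 SD

t, p = stats.ttest_ind(treated, control)
print(f"p = {p:.2g}, mean difference = {treated.mean() - control.mean():.3f}")
# p lands far below 0.05 even though a 0.02 SD shift is clinically negligible
```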
@badamczewski01 @Scooletz @konradkokosa @SitnikAdam I do not recommend statistical tests for perf measurements
1) They have tons of problems that can easily mislead you if you don't have enough experience. It's worth reading https://t.co/C6RqfCVvzU and https://t.co/B6XcLahdBb
To emphasize how fragile the statistics are: one fewer symptomatic patient in the Day1 group and the result would have been significant according to the arbitrary rules of medical research statistics (a sketch with hypothetical counts follows below).
One should be careful with p-value tyranny.
https://t.co/HaGtJ42Ao5
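The sketch promised above, with hypothetical 2x2 counts (not the trial's actual data); it shows how moving a single patient can push a result across the p = 0.05 line:

```python
from scipy.stats import chi2_contingency

# rows: Day1 group, comparator; columns: symptomatic, not symptomatic
observed = [[8, 42], [17, 33]]   # hypothetical observed counts: p > 0.05
shifted  = [[7, 43], [17, 33]]   # one fewer symptomatic Day1 patient: p < 0.05

for label, table in (("observed", observed), ("one patient fewer", shifted)):
    chi2, p, dof, expected = chi2_contingency(table)  # Yates-corrected 2x2 test
    print(f"{label}: p = {p:.3f}")
# roughly p ≈ 0.065 vs p ≈ 0.035; the verdict flips on a single patient
```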
@JeremyAnso @Zulu18360299 @philippefroguel Naively interpreting data dichotomously on the basis of Fisher's test can lead to exactly this kind of aberration. A very instructive and relevant read, in my opinion:
https://t.co/HaGtJ42Ao5
@LinAung26696501 "In a series of studies, it is less likely that all studies will suffer from the same type of bias; consequently, their composite picture may be more informative than the result of a single large trial." Of course, all of those studies should be published. https://t.co/MceA0G9J1g
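A minimal sketch of why the composite picture can be informative: inverse-variance (fixed-effect) pooling of three hypothetical studies, all numbers made up for illustration.

```python
import numpy as np

effects = np.array([0.25, 0.10, 0.18])   # hypothetical per-study effect estimates
ses     = np.array([0.12, 0.09, 0.15])   # hypothetical standard errors

w = 1.0 / ses**2                          # inverse-variance weights
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
print(f"pooled effect = {pooled:.3f} (SE {pooled_se:.3f})")
# the pooled SE is smaller than any single study's SE
```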
@lobrowR @cassie_olmsted @jmheberling @spiderdayNight @BrandonHoenig See the section "Uncertainty, probability, and statistical significance" here https://t.co/AasgDdORta, and the last bit of this paper for advice: https://t.co/tHHCzVd6UI
Important read for students & experts alike: "...mistrust in nonsignificant results that leads to publication bias is caused by confusion about interpretation of larger p-values that goes back to historical disputes among the founders of modern statistics" https://t.co/gj1APCJSYD
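A hedged simulation sketch of one source of that confusion (all parameters made up): with a real but small effect and small samples, most studies come out "nonsignificant", so p > 0.05 is not evidence that the effect is absent.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect, n, reps = 0.3, 30, 10_000    # small true effect, small samples

pvals = [
    stats.ttest_ind(rng.normal(true_effect, 1, n), rng.normal(0, 1, n)).pvalue
    for _ in range(reps)
]
print(f"share of studies with p < 0.05: {np.mean(np.array(pvals) < 0.05):.2f}")
# roughly 0.2 here: about four in five studies of a true effect are "nonsignificant"
```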