@DrDominikVogel @ingorohlfing @Chengxin_M_Xu @Journal_BPA As I explain in https://t.co/HNyInPv1xY and https://t.co/2H8XIkaaf3, the pattern of p-values is influenced by many factors, especially in a heterogeneous set of studies like this. If there were only true effects, the curve should be steeper.
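The "steeper curve" point can be illustrated with a quick simulation (a sketch of my own, not code from the linked papers; the function names, sample sizes, and effect sizes below are illustrative assumptions): under the null, significant p-values are uniform on (0, .05), while a true effect piles p-values up near zero, making the p-curve right-skewed.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(1)

def significant_p_values(effect_size, n_per_group, n_studies=50_000):
    """Simulate two-sided two-sample z-test p-values for many studies
    with a given true standardized effect; keep only those below .05."""
    se = sqrt(2.0 / n_per_group)                 # SE of the mean difference
    z = rng.normal(effect_size / se, 1.0, n_studies)
    p = np.array([erfc(abs(zi) / sqrt(2.0)) for zi in z])  # two-sided p
    return p[p < 0.05]

def fraction_below(p, cutoff=0.01):
    """Share of significant p-values below `cutoff` --
    a crude measure of how 'steep' (right-skewed) the p-curve is."""
    return (p < cutoff).mean()

null_p = significant_p_values(0.0, 50)   # no true effect: flat curve
true_p = significant_p_values(0.5, 50)   # medium true effect: steep curve

# Under the null, significant p-values are uniform on (0, .05),
# so roughly 20% fall below .01; with a true effect, far more do.
print(round(fraction_below(null_p), 2))  # close to 0.20
print(round(fraction_below(true_p), 2))  # substantially larger
```

With heterogeneous studies the observed curve is a mixture of many such distributions (plus selection effects), which is why a shallow curve by itself does not pin down a single cause.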
@MarkZobeck @ErikVanZwet @aidybarnett That's the rub - you are already grouping pub bias and researchers' DoF together, and you assume you can say anything about them in this graph. I explain in this 2015 paper that this is very difficult: https://t.co/s2Rq4iX7wE.
@shilaan01 That's a real hornet's nest. People have tried to identify p-hacking. Here's a comment of mine explaining why that doesn't work: https://t.co/2H8XIkaaf3 and here's a paper explaining why it is practically impossible even with perfect data: https://t.co/s2Rq4iX7wE
@wuthrich_k @jhaushofer Very cool - you might be interested in some early criticism I wrote on papers using p-curves to conclude widespread bias, where I also pointed out it's not that easy: https://t.co/s2Rq4iX7wE and https://t.co/2H8XIkaaf3.
@profjmb See https://t.co/SlHZ1IKdZQ - if you want to make specific claims you will need to model those, and as my paper shows, that is not easy. People widely misunderstand this.
@rubenarslan @RogertheGS @richarddmorey @ceptional @LeonidTiokhin Exactly. I probably wrote the best paper in the literature on the challenges of drawing conclusions from p-values just below .05: https://t.co/SlHZ1IKdZQ And I am still ok with using this as a general heuristic for non-prereg papers in journals we know commit massive publication bias.
2019 resolution - rely less on p-values and more on persuasive science. Great read by @lakens - another reason we should rethink how we use p-values when formulating hypotheses. Just because something isn't < .05 doesn't make it true, and vice versa. https://t.co/haj7VxcRnh