The letter has much to commend it, as does the intent of the original ASA and Nature papers. I concur with moving away from a dichotomous approach built around a somewhat arbitrary cut-off. However, a p-value can be viewed as a standardisation technique: it takes evidence (estimates, including of variability) and, under distributional assumptions, maps that evidence to a 0-1 scale. A key reason to standardise is to facilitate comparison, and it would be remiss, in my view, not to provide reference points (or intervals) that create a common understanding of the strength of evidence. That does not mean that researchers need be slaves to such reference points (0.049 is no different from 0.051), but anchoring the evidence provides a framework for a common understanding. A recent proposal to move to a 0.005 threshold was, in my view, misguided, but 0.005 is certainly a "reference point" that, ceteris paribus, indicates stronger evidence than 0.05 (or 0.01). The authors are right to encourage many of the best practices they propose, and my comment is not aimed at diminishing that. However, standardisation without a series of reference points strikes me as potentially unhelpful to the broader research community, with the inevitable consequence that some variation thereof will subsequently be brought back. "Thresholds to pass" should indeed be consigned to the past, but reference points to anchor the evidence remain much needed.
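To illustrate the standardisation point above, here is a minimal sketch (my own illustration, not part of the letter) of how, under a normal distributional assumption, a standardised test statistic is mapped to the 0-1 p-value scale:

```python
from math import erfc, sqrt

def two_sided_p(z: float) -> float:
    # Under a standard normal assumption, map a standardised
    # test statistic z to the 0-1 scale: P(|Z| >= |z|).
    return erfc(abs(z) / sqrt(2))

# Familiar reference points recovered from the mapping:
print(round(two_sided_p(1.96), 3))  # ~0.05
print(round(two_sided_p(2.58), 3))  # ~0.01
```

Whatever the original units of the estimate and its standard error, the mapping lands on the same scale, which is precisely why shared reference points on that scale aid comparison.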
Andrew Garrett, Kew, London, UK