Misunderestimating the significance of p-values

In (biological) science there is an expectation that any quantitative evidence be accompanied by a statistical significance metric such as a p-value or error bars. A result is then deemed significant if its p-value falls below a chosen threshold, which corresponds to a confidence level: a threshold of p < 0.01, for example, corresponds to 99% confidence. I personally believe results should only be considered significant at extremely stringent thresholds, with p < 0.001 (99.9% confidence) a bare minimum and ideally p < 0.00001 (99.999%) or lower. But this obviously depends on the experiment and sample size.
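
A quick simulation (my own sketch, not from any of the papers below) shows why lax thresholds are a problem: under a true null hypothesis, p-values are uniformly distributed, so a threshold of alpha will flag roughly a fraction alpha of no-effect experiments as "significant". The choice of a two-sample t-test and n = 20 per group here is an arbitrary assumption for illustration.

```python
# Simulate many experiments where NO real effect exists, and count how
# often each significance threshold is crossed purely by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_experiments = 100_000
n_samples = 20  # samples per group in each simulated experiment

pvals = np.empty(n_experiments)
for i in range(n_experiments):
    # Both groups are drawn from the SAME distribution: any "significant"
    # p-value below is a false positive.
    a = rng.normal(0.0, 1.0, n_samples)
    b = rng.normal(0.0, 1.0, n_samples)
    pvals[i] = stats.ttest_ind(a, b).pvalue

for alpha in (0.05, 0.01, 0.001, 0.00001):
    frac = np.mean(pvals < alpha)
    print(f"p < {alpha:<8}: {frac:.5f} of null experiments flagged 'significant'")
```

At p < 0.05 roughly one experiment in twenty passes despite there being nothing to find; at p < 0.001 that drops to about one in a thousand.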

Recently there have been two excellent papers addressing the issue of p-value misuse: one late last year by Valen Johnson in PNAS [DOI] and the other this week by Regina Nuzzo in Nature [DOI]. Both papers discuss the effect p-values have on the reproducibility of data (and conclusions).
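
The reproducibility point is easy to demonstrate for yourself. The sketch below (again my own, with an assumed true effect of 0.5 standard deviations and n = 20 per group) runs the same experiment many times and shows how wildly the p-value fluctuates between exact replications, which is one reason a single result near p = 0.05 so often fails to reproduce.

```python
# Replicate the identical experiment many times and look at the spread
# of p-values obtained, even though a genuine effect is present.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_replicates = 1_000
n_samples = 20
effect = 0.5  # assumed true mean difference, in standard deviations

pvals = np.array([
    stats.ttest_ind(rng.normal(effect, 1.0, n_samples),
                    rng.normal(0.0, 1.0, n_samples)).pvalue
    for _ in range(n_replicates)
])

print(f"median p = {np.median(pvals):.3f}")
print(f"5th-95th percentile: {np.percentile(pvals, 5):.4f} "
      f"to {np.percentile(pvals, 95):.2f}")
print(f"fraction of replicates with p < 0.05: {np.mean(pvals < 0.05):.2f}")
```

With these settings the p-values from identical experiments span several orders of magnitude, and only a minority of replicates clear the conventional 0.05 bar.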

This is a difficult issue to address because current p-value usage is so ingrained in the scientific community. The only effective remedy I can see is for journals to require both that p-values be used correctly and that all underlying data be published open access.
