Abstract

The findings of medical research are often met with considerable scepticism, even when they have apparently come from studies with sound methodologies that have been subjected to appropriate statistical analysis. This is perhaps particularly the case with respect to epidemiological findings that suggest that some aspect of everyday life is bad for people. Indeed, one recent popular history, the medical journalist James Le Fanu's The Rise and Fall of Modern Medicine, went so far as to suggest that the solution to medicine's ills would be the closure of all departments of epidemiology.1 One contributory factor is that the medical literature shows a strong tendency to accentuate the positive; positive outcomes are more likely to be reported than null results.2–4 By this means alone a host of purely chance findings will be published, as by conventional reasoning examining 20 associations will produce one result that is “significant at P=0.05” by chance alone. If only positive findings are published then they may be mistakenly considered to be of importance rather than being the necessary chance results produced by the application of criteria for meaningfulness based on statistical significance. As many studies contain long questionnaires collecting information on hundreds of variables, and measure a wide range of potential outcomes, several false positive findings are virtually guaranteed. The high volume and often contradictory nature5 of medical research findings, however, is not only because of publication bias. A more fundamental problem is the widespread misunderstanding of the nature of statistical significance.

Summary points

P values, or significance levels, measure the strength of the evidence against the null hypothesis; the smaller the P value, the stronger the evidence against the null hypothesis.

An arbitrary division of results, into “significant” or “non-significant” according to the P value, was not the intention of the …
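The “20 associations, one spurious result at P=0.05” arithmetic in the abstract can be checked with a short simulation. This is a minimal sketch rather than anything from the paper itself: the two-sample t test, the sample sizes, and the number of repetitions are all illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_tests = 20        # associations examined, all truly null
n_per_group = 100   # illustrative sample size per group
alpha = 0.05        # conventional significance threshold
n_repeats = 1000    # number of simulated "studies"

counts = []
for _ in range(n_repeats):
    significant = 0
    for _ in range(n_tests):
        # Both groups are drawn from the same distribution, so the null
        # hypothesis is true for every one of the 20 comparisons.
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(0.0, 1.0, n_per_group)
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            significant += 1
    counts.append(significant)

print(f"Mean 'significant' findings per 20 null tests: {np.mean(counts):.2f}")
```

Across repeats the mean settles near n_tests × alpha = 1: roughly one chance finding per 20 true-null associations, which is exactly the kind of result that publication bias then selects for print.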

Keywords

Psychology, Statistical hypothesis testing, Statistics, Mathematics

Related Publications

A Direct Approach to False Discovery Rates

Summary: Multiple-hypothesis testing involves guarding against much more complicated errors than single-hypothesis testing. Whereas we typically control the type I error rate for...

2002, Journal of the Royal Statistical Society, 5607 citations
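For readers following up the related paper: the sketch below implements the classic Benjamini-Hochberg step-up procedure for controlling the false discovery rate across many tests. It is not the q-value “direct approach” of the cited article, only the standard procedure it builds on, and the example p values are invented for illustration.

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns a boolean array marking which hypotheses are rejected while
    controlling the false discovery rate at level q."""
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order]
    # Find the largest k with p_(k) <= (k / m) * q, then reject p_(1)..p_(k).
    thresholds = (np.arange(1, m + 1) / m) * q
    below = ranked <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()
        reject[order[:k + 1]] = True
    return reject

# Invented p values for 20 tests: a couple of small ones among mostly nulls.
p_vals = [0.001, 0.004, 0.039, 0.041, 0.060, 0.074, 0.205, 0.212,
          0.216, 0.222, 0.251, 0.269, 0.275, 0.340, 0.341, 0.384,
          0.569, 0.594, 0.696, 0.810]
print(benjamini_hochberg(p_vals, q=0.05))
```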

Publication Info

Year: 2001
Type: Review
Volume: 322
Issue: 7280
Pages: 226-231
Citations: 1457
Access: Closed


Citation Metrics

1457 citations (source: OpenAlex)

Cite This

Jonathan A C Sterne (2001). Sifting the evidence---what's wrong with significance tests? Another comment on the role of statistical methods. BMJ, 322(7280), 226-231. https://doi.org/10.1136/bmj.322.7280.226

Identifiers

DOI
10.1136/bmj.322.7280.226