Abstract
Randomized controlled clinical trials are conducted to determine whether clinically important differences exist between treatment regimens. When statistical analysis yields a P value greater than 5%, convention deems the assessed difference nonsignificant. That such findings are conventionally termed nonsignificant, or negative, however, does not mean the study found nothing of clinical importance. Subject samples in controlled trials tend to be too small, so studies often lack the power to detect real, clinically worthwhile differences between treatments. Freiman et al found that only 30% of a sample of 71 trials with a P value greater than 10%, published in the New England Journal of Medicine in 1978-79, were large enough to have a 90% chance of detecting even a 50% difference in the effectiveness of the treatments being compared, and they found no improvement in a similar sample of trials published in 1988. It is therefore wrong and unwise to interpret so many negative trials as evidence of the ineffectiveness of new treatments; one must instead seriously question whether the absence of evidence is a valid justification for inaction. Efforts should be made to quantify an association rather than report only a P value, especially when the risks under investigation are small. The authors cite a recent trial comparing octreotide and sclerotherapy in patients with variceal bleeding, and the overview of clinical trials of fibrinolytic treatment for preventing reinfarction after acute myocardial infarction, as examples.
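The power argument in the abstract can be illustrated with a standard sample-size calculation for comparing two proportions. The sketch below uses the common normal-approximation formula; the 40% baseline event rate and the halved treatment rate are illustrative assumptions, not figures from the paper.

```python
import math
from statistics import NormalDist

def n_per_group(p_control, p_treatment, alpha=0.05, power=0.90):
    """Approximate patients needed per arm to detect the difference
    between two proportions, using the normal-approximation formula
    with a two-sided significance level `alpha` and the given power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # ~1.28 for power=0.90
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    n = (z_alpha + z_beta) ** 2 * variance / (p_control - p_treatment) ** 2
    return math.ceil(n)

# Illustrative case: a 50% relative reduction from an assumed 40%
# baseline event rate needs roughly a hundred patients per arm
# for 90% power at the 5% significance level.
print(n_per_group(0.40, 0.20))
```

Many of the "negative" trials Freiman et al reviewed enrolled far fewer patients than such a calculation requires, which is why their nonsignificant results say little about the equivalence of the treatments compared.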
Publication Info
- Year: 1995
- Type: article
- Volume: 311
- Issue: 7003
- Pages: 485
- Citations: 1740
- Access: Closed
Identifiers
- DOI: 10.1136/bmj.311.7003.485
- PMID: 7647644
- PMCID: PMC2550545