Abstract

Results of non-randomised studies sometimes, but not always, differ from results of randomised studies of the same intervention. Non-randomised studies may give seriously misleading results even when treated and control groups appear similar in key prognostic factors. Standard methods of case-mix adjustment do not guarantee removal of bias: residual confounding may be high even when good prognostic data are available, and in some situations adjusted results may appear more biased than unadjusted results. Although many quality assessment tools exist and have been used for appraising non-randomised studies, most omit key quality domains. Healthcare policies based on non-randomised studies, or on systematic reviews of non-randomised studies, may need re-evaluation if the uncertainty in the true evidence base was not fully appreciated when they were made. The inability of case-mix adjustment methods to compensate for selection bias, and our inability to identify non-randomised studies that are free of selection bias, indicate that non-randomised studies should be undertaken only when randomised controlled trials (RCTs) are infeasible or unethical. Recommendations for further research include: applying the resampling methodology in other clinical areas to ascertain whether the biases described are typical; developing or refining existing quality assessment tools for non-randomised studies; investigating how quality assessments of non-randomised studies can be incorporated into reviews, and the implications of individual quality features for the interpretation of a review's results; examining the reasons for the apparent failure of case-mix adjustment methods; and further evaluating the role of the propensity score.

Keywords

Medicine; Systematic review; Randomized controlled trial; Psychological intervention; Blinding; MEDLINE; Meta-analysis; Internal validity; Physical therapy; Surgery; Psychiatry; Pathology

Publication Info

Year: 2003
Type: Review
Volume: 7
Issue: 27
Pages: iii-x, 1
Citations: 2872
Access: Closed

Cite This

Jonathan J Deeks, Jacqueline Dinnes, Roberto D’Amico et al. (2003). Evaluating non-randomised intervention studies. Health Technology Assessment, 7(27), iii-x, 1. https://doi.org/10.3310/hta7270

Identifiers

DOI: 10.3310/hta7270