Abstract

Applied researchers often find themselves making statistical inferences in settings that would seem to require multiple comparisons adjustments. We challenge the Type I error paradigm that underlies these corrections. Moreover, we posit that the problem of multiple comparisons can disappear entirely when viewed from a hierarchical Bayesian perspective. We propose building multilevel models in the settings where multiple comparisons arise. Multilevel models perform partial pooling (shifting estimates toward each other), whereas classical procedures typically keep the centers of intervals stationary, adjusting for multiple comparisons by making the intervals wider (or, equivalently, adjusting the p-values corresponding to intervals of fixed width). Thus, multilevel models address the multiple comparisons problem and also yield more efficient estimates, especially in settings with low group-level variation, which is where multiple comparisons are a particular concern.
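The abstract's contrast between partial pooling and interval widening can be made concrete with a small numerical sketch. The snippet below is illustrative only, not code from the paper: the group estimates y, standard errors sigma, and the moment-based plug-in values for mu and tau^2 are all made-up assumptions, and the shrinkage formula is the standard normal-normal hierarchical posterior mean rather than the authors' full model.

    import numpy as np
    from scipy.stats import norm

    # Illustrative group-level estimates y_j and standard errors sigma_j
    # (made-up numbers, not data from the paper).
    y = np.array([0.28, -0.05, 0.12, 0.41, -0.18, 0.07])
    sigma = np.array([0.15, 0.20, 0.10, 0.25, 0.18, 0.12])
    J = len(y)

    # Crude moment estimates of the population mean mu and between-group
    # variance tau^2; a real analysis would estimate these by fitting a
    # hierarchical model (e.g., in Stan) rather than plugging in moments.
    mu = np.average(y, weights=1 / sigma**2)
    tau2 = max(np.var(y) - np.mean(sigma**2), 1e-6)

    # Partial pooling under the normal-normal model: each estimate is a
    # precision-weighted average of y_j and mu, so noisier groups are
    # shrunk more strongly toward the common mean.
    pooled = (y / sigma**2 + mu / tau2) / (1 / sigma**2 + 1 / tau2)

    # The classical route keeps the centers at y_j and widens the
    # intervals instead, e.g. Bonferroni replaces alpha with alpha / J.
    z_unadj = norm.ppf(1 - 0.05 / 2)
    z_bonf = norm.ppf(1 - 0.05 / (2 * J))

    for j in range(J):
        print(f"group {j}: raw {y[j]:+.2f} -> partially pooled {pooled[j]:+.2f}; "
              f"95% half-width {z_unadj * sigma[j]:.2f} unadjusted, "
              f"{z_bonf * sigma[j]:.2f} Bonferroni-adjusted")

When group-level variation is low (small tau^2), the mu / tau2 term dominates and the estimates are pulled strongly together, which is exactly the regime the abstract identifies as the one where multiple comparisons are the greatest concern for classical corrections.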

Keywords

Pooling, Multilevel model, Bayesian probability, Computer science, Type I and type II errors, Nominal level, Multiple comparisons problem, Hierarchical database model, Statistics, Bayesian inference, Econometrics, Mathematics, Artificial intelligence, Data mining, Machine learning, Confidence interval

Publication Info

Year: 2012
Type: article
Volume: 5
Issue: 2
Pages: 189-211
Citations: 1297
Access: Closed

Citation Metrics

OpenAlex: 1297
Influential: 37
CrossRef: 738

Cite This

Andrew Gelman, Jennifer Hill, Masanao Yajima (2012). Why We (Usually) Don't Have to Worry About Multiple Comparisons. Journal of Research on Educational Effectiveness, 5(2), 189-211. https://doi.org/10.1080/19345747.2011.618213

Identifiers

DOI: 10.1080/19345747.2011.618213
arXiv: 0907.2478
