Abstract

Traditional methods of analyzing data from psychological experiments are based on the assumption that there is a single random factor (normally participants) to which generalization is sought. However, many studies involve at least two random factors (e.g., participants and the targets to which they respond, such as words, pictures, or individuals). The application of traditional analytic methods to the data from such studies can result in serious bias in testing experimental effects. In this review, we develop a comprehensive typology of designs involving two random factors, which may be either crossed or nested, and one fixed factor, condition. We present appropriate linear mixed models for all designs and develop effect size measures. We provide the tools for power estimation for all designs. We then discuss issues of design choice, highlighting power and feasibility considerations. Our goal is to encourage appropriate analytic methods that produce replicable results for studies involving new samples of both participants and targets.
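The "serious bias" the abstract warns about can be illustrated with a small Monte Carlo sketch (not from the paper; variance values and design sizes are illustrative assumptions). Below, participants all respond to the same two sets of targets (a crossed design), the true condition effect is zero, and the analysis averages over targets and runs a participants-only paired t-test. Because the shared target variability never cancels out, the nominal 5% test rejects far too often:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative design: 30 participants, 10 targets per condition,
# equal participant, target, and residual SDs (assumed values).
n_part, n_targ = 30, 10
sd_part, sd_targ, sd_err = 1.0, 1.0, 1.0

def one_experiment():
    # True condition effect is zero: the null hypothesis holds.
    part = rng.normal(0, sd_part, n_part)[:, None, None]
    # Target effects are shared by all participants (crossed design).
    targ = rng.normal(0, sd_targ, (2, n_targ))[None, :, :]
    err = rng.normal(0, sd_err, (n_part, 2, n_targ))
    y = part + targ + err                      # (n_part, 2, n_targ)
    # Participants-only analysis: average over targets, paired t-test.
    m = y.mean(axis=2)                         # (n_part, 2)
    return stats.ttest_rel(m[:, 0], m[:, 1]).pvalue

pvals = np.array([one_experiment() for _ in range(2000)])
rate = (pvals < 0.05).mean()
print(f"Type I error rate ignoring target variance: {rate:.2f}")
```

With these assumed variances the false-positive rate lands well above the nominal .05, which is the motivation for the crossed linear mixed models (with random effects for both participants and targets) that the review develops.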

Keywords

Generalization, Psychology, Typology, Random effects model, Statistical power, Power (physics), Computer science, Econometrics, Factor (programming language), Statistics, Machine learning, Cognitive psychology, Mathematics, Meta-analysis

MeSH Terms

Humans; Models, Statistical; Psychology; Research Design

Publication Info

Year: 2016
Type: Review
Volume: 68
Issue: 1
Pages: 601-625
Citations: 458
Access: Closed

Citation Metrics

OpenAlex: 458
Influential: 23
CrossRef: 313

Cite This

Charles M. Judd, Jacob Westfall, David A. Kenny (2016). Experiments with More Than One Random Factor: Designs, Analytic Models, and Statistical Power. Annual Review of Psychology, 68(1), 601-625. https://doi.org/10.1146/annurev-psych-122414-033702

Identifiers

DOI: 10.1146/annurev-psych-122414-033702
PMID: 27687116

Data Quality

Data completeness: 86%