Abstract
Researchers designing experiments in which a sample of participants responds to a sample of stimuli are faced with difficult questions about optimal study design. The conventional procedures of statistical power analysis fail to provide appropriate answers to these questions because they are based on statistical models in which stimuli are not assumed to be a source of random variation in the data, models that are inappropriate for experiments involving crossed random factors of participants and stimuli. In this article, we present new methods of power analysis for designs with crossed random factors, and we give detailed, practical guidance to psychology researchers planning experiments in which a sample of participants responds to a sample of stimuli. We extensively examine five commonly used experimental designs, describe how to estimate statistical power in each, and provide power analysis results based on a reasonable set of default parameter values. We then develop general conclusions and formulate rules of thumb concerning the optimal design of experiments in which a sample of participants responds to a sample of stimuli. We show that in crossed designs, statistical power typically does not approach unity as the number of participants goes to infinity but instead approaches a maximum attainable power value that is possibly small, depending on the stimulus sample. We also consider the statistical merits of designs involving multiple stimulus blocks. Finally, we provide a simple and flexible Web-based power application to aid researchers in planning studies with samples of stimuli.
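The abstract's central claim, that in crossed designs power plateaus below 1 as the participant sample grows, can be illustrated with a small sketch. The variance decomposition and default values below are illustrative assumptions, not the paper's exact formulas or defaults: the squared standard error of the condition effect is modeled as one term that shrinks with the participant sample, one that shrinks only with the stimulus sample, and a residual term that shrinks with both.

```python
from math import erf, sqrt

def _phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def approx_power(d, n_participants, n_stimuli,
                 var_participant=0.5, var_stimulus=0.5, var_residual=1.0,
                 z_crit=1.96):
    """Normal-approximation power for a crossed participants-by-stimuli design.

    Hypothetical variance decomposition (not the paper's formula): the
    stimulus term var_stimulus / n_stimuli does not shrink as participants
    are added, so it bounds the standard error from below.
    """
    se = sqrt(var_participant / n_participants
              + var_stimulus / n_stimuli
              + var_residual / (n_participants * n_stimuli))
    ncp = d / se
    # Two-sided test at alpha = .05 (z_crit = 1.96)
    return _phi(ncp - z_crit) + _phi(-ncp - z_crit)
```

With, say, d = 0.4 and 16 stimuli, adding participants beyond a few hundred barely moves power, because the stimulus variance term caps the attainable noncentrality; power approaches a ceiling well short of 1, and only sampling more stimuli raises that ceiling.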
Publication Info
- Year: 2014
- Type: article
- Volume: 143
- Issue: 5
- Pages: 2020-2045
- Citations: 895
- Access: Closed
Identifiers
- DOI: 10.1037/xge0000014
- PMID: 25111580