Abstract
In psychology, attempts to replicate published findings are less successful than expected. For properly powered studies, the replication rate should be around 80%, whereas in practice less than 40% of studies selected from different areas of psychology could be replicated. Researchers in cognitive psychology are hindered in estimating the power of their studies, because the designs they use present a sample of stimulus materials to a sample of participants, a situation not covered by most power formulas. To remedy the situation, we review the literature on the topic and introduce recent software packages, which we apply to the data of two masked priming studies with high power. We examined how the power of each study can be estimated and how much the samples of participants and stimuli could be reduced while keeping the study powerful enough. On the basis of this analysis, we recommend that a properly powered reaction time experiment with repeated measures include at least 1,600 word observations per condition (e.g., 40 participants, 40 stimuli). This is considerably more than current practice. We also show that researchers must include the number of observations in meta-analyses, because the effect sizes currently reported depend on the number of stimuli presented to the participants. Our analyses can easily be applied to newly gathered datasets.
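The 1,600-observations-per-condition recommendation (e.g., 40 participants crossed with 40 stimuli) can be explored with a simulation-based power analysis. The sketch below is not the article's own procedure (the authors rely on dedicated mixed-effects power software); it is a minimal Monte Carlo illustration in Python in which the effect size, variance components, and function names are assumptions chosen for demonstration.

```python
# Monte Carlo power sketch for a crossed participants-by-stimuli design.
# All parameter values (20 ms priming effect, variance components) are
# illustrative assumptions, not estimates taken from the article.
import numpy as np
from scipy import stats

def simulate_power(n_participants=40, n_stimuli=40, effect_ms=20,
                   sd_participant=40, sd_stimulus=20, sd_residual=100,
                   n_sims=500, alpha=0.05, seed=0):
    """Estimate power for a within-participant priming effect by
    simulating crossed random intercepts and testing participant means."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        p_int = rng.normal(0, sd_participant, n_participants)[:, None]
        s_int = rng.normal(0, sd_stimulus, n_stimuli)[None, :]
        noise = lambda: rng.normal(0, sd_residual, (n_participants, n_stimuli))
        related = p_int + s_int + noise()                # primed condition
        unrelated = p_int + s_int + effect_ms + noise()  # slower by effect_ms
        # Collapse over stimuli, then paired test across participants.
        diff = unrelated.mean(axis=1) - related.mean(axis=1)
        _, p = stats.ttest_1samp(diff, 0.0)
        hits += p < alpha
    return hits / n_sims

# 40 x 40 = 1,600 observations per condition, as in the recommendation.
print(simulate_power())
```

Under these assumed settings the design reaches very high power, while setting `effect_ms=0` recovers roughly the nominal alpha level; shrinking `n_participants` or `n_stimuli` shows how quickly power degrades, which is the trade-off the article quantifies with real data.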
Related Publications
- Statistical power and optimal design in experiments in which samples of participants respond to samples of stimuli: "Researchers designing experiments in which a sample of participants responds to a sample of stimuli are faced with difficult questions about optimal study design. The convention..."
- An Agenda for Purely Confirmatory Research: "The veracity of substantive research claims hinges on the way experimental data are collected and analyzed. In this article, we discuss an uncomfortable fact that threatens the ..."
- Understanding the New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis: "Preface. About this Book 1. Introduction to The New Statistics 2. From Null Hypothesis Significance Testing to Effect Sizes 3. Confidence Intervals 4. Confidence Intervals, Erro..."
- Investigating Variation in Replicability: "Although replication is a central tenet of science, direct replications are rare in psychology. This research tested variation in the replicability of 13 classic and contemporar..."
- Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015: "Being able to replicate scientific findings is crucial for scientific progress. We replicate 21 systematically selected experimental studies in the social science..."
Publication Info
- Year: 2018
- Type: article
- Volume: 1
- Issue: 1
- Pages: 9-9
- Citations: 1241
- Access: Closed
Identifiers
- DOI: 10.5334/joc.10