Abstract

Analysis of text from open-ended interviews has become an important research tool in numerous fields, including business, education, and health research. Coding is an essential part of such analysis, but questions of quality control in the coding process have generally received little attention. This article examines the text coding process applied to three HIV-related studies conducted in collaboration with the Centers for Disease Control and Prevention, involving populations in the United States and Zimbabwe. Based on experience coding data from these studies, we conclude that (1) a team of coders will initially produce very different codings, but (2) it is possible, through a process of codebook revision and recoding, to establish strong levels of intercoder reliability (e.g., most codes with kappa ≥ 0.8). Furthermore, steps can be taken to improve initially poor intercoder reliability and to reduce the number of iterations required to generate stronger intercoder reliability.
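The intercoder reliability benchmark cited above (kappa ≥ 0.8) refers to Cohen's kappa, which corrects raw percent agreement for agreement expected by chance. A minimal sketch of the standard two-coder computation follows; the example data are hypothetical, not from the studies described in the article.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' parallel judgments on the same segments.

    kappa = (observed agreement - expected agreement) / (1 - expected agreement)
    """
    assert len(coder_a) == len(coder_b) and len(coder_a) > 0
    n = len(coder_a)
    # Observed agreement: fraction of segments where the coders match.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement under independence, from each coder's marginal frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical example: two coders marking a code as present (1) or absent (0)
# in ten interview segments; they disagree on one segment.
a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
b = [1, 0, 0, 1, 0, 0, 1, 1, 0, 1]
print(round(cohens_kappa(a, b), 2))  # → 0.8
```

Note that kappa can be far below raw agreement (here 0.8 versus 90% raw agreement) when a code is applied unevenly, which is why chance-corrected measures are preferred for codebook evaluation.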

Keywords

Codebook, Coding (social sciences), Reliability (statistics), Psychology, Computer science, Human immunodeficiency virus (HIV), Social psychology, Applied psychology, Cognitive psychology, Artificial intelligence, Medicine, Statistics, Mathematics

Publication Info

Year: 2004
Type: Article
Volume: 16
Issue: 3
Pages: 307-331
Citations: 793 (OpenAlex)
Access: Closed

Cite This

Daniel J. Hruschka, Deborah A. Schwartz, Daphne John et al. (2004). Reliability in Coding Open-Ended Data: Lessons Learned from HIV Behavioral Research. Field Methods, 16(3), 307-331. https://doi.org/10.1177/1525822x04266540

Identifiers

DOI
10.1177/1525822x04266540