Abstract

Assessing inter-rater reliability, whereby data are independently coded and the codings compared for agreements, is a recognised process in quantitative research. However, its applicability to qualitative research is less clear: should researchers be expected to identify the same codes or themes in a transcript, or should they be expected to produce different accounts? Some qualitative researchers argue that assessing inter-rater reliability is an important method for ensuring rigour, others that it is unimportant; yet it has never been formally examined in an empirical qualitative study. Accordingly, to explore the degree of inter-rater reliability that might be expected, six researchers were asked to identify themes in the same focus group transcript. The results showed close agreement on the basic themes, but each analyst 'packaged' the themes differently.
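The study compares the analysts' themes qualitatively rather than computing an agreement statistic, but the comparison of independent codings that the abstract alludes to is usually quantified, in the quantitative tradition, with a chance-corrected index such as Cohen's kappa. The sketch below (plain Python, standard library only; the coder names, theme labels, and codings are invented for illustration, not taken from the study) shows how such an index is computed for two raters coding the same transcript segments:

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    # Kappa = (observed agreement - chance agreement) / (1 - chance agreement).
    n = len(rater_a)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: for each code, the probability that both raters
    # assign it by chance, given their marginal frequencies.
    p_chance = sum((counts_a[c] / n) * (counts_b[c] / n)
                   for c in set(rater_a) | set(rater_b))
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical codings of ten transcript segments by two analysts.
coder_1 = ["stigma", "identity", "stigma", "family", "identity",
           "stigma", "family", "identity", "stigma", "family"]
coder_2 = ["stigma", "identity", "family", "family", "identity",
           "stigma", "family", "stigma", "stigma", "family"]
print(f"kappa = {cohens_kappa(coder_1, coder_2):.2f}")  # 0.70

By the conventional Landis and Koch reading, values above roughly 0.6 indicate substantial agreement; the article's point, however, is precisely that such a number captures agreement on individual codes while saying nothing about how analysts package those codes into larger accounts.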

Keywords

Inter-rater reliability, Reliability, Empirical research, Sociology, Qualitative research, Psychology, Epistemology, Social science, Statistics, Mathematics, Rating scale, Developmental psychology

Publication Info

Year: 1997
Type: Article
Volume: 31
Issue: 3
Pages: 597-606
Citations: 944 (OpenAlex)
Access: Closed

Cite This

David Armstrong, Ann Gosling, John Weinman, et al. (1997). The Place of Inter-Rater Reliability in Qualitative Research: An Empirical Study. Sociology, 31(3), 597-606. https://doi.org/10.1177/0038038597031003015

Identifiers

DOI
10.1177/0038038597031003015