Abstract
At least a dozen indexes have been proposed for measuring agreement between two judges on a categorical scale. Using the binary (positive-negative) case as a model, this paper presents and critically evaluates some of these proposed measures. The importance of correcting for chance-expected agreement is emphasized, and identities with intraclass correlation coefficients are pointed out.
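The abstract's central point, correcting raw agreement for the agreement expected by chance, can be illustrated with Cohen's kappa in the binary (positive-negative) case the paper uses as its model. This is a minimal illustrative sketch, not code from the paper itself; the function name and the 2x2 cell labels are my own.

```python
def cohen_kappa_2x2(a, b, c, d):
    """Chance-corrected agreement (Cohen's kappa) for two judges on a
    binary scale, given the four cells of the 2x2 agreement table:

        a = both judges positive      b = judge 1 positive, judge 2 negative
        c = judge 1 negative, judge 2 positive      d = both judges negative
    """
    n = a + b + c + d
    p_obs = (a + d) / n  # raw proportion of agreement
    # agreement expected by chance from each judge's marginal rates
    p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)


# Example: 50 subjects, 35 agreements (20 positive, 15 negative)
print(cohen_kappa_2x2(20, 5, 10, 15))  # 0.4: well below the raw 0.7 agreement
```

The example shows why the correction matters: the judges agree on 70% of subjects, but roughly half of that agreement is expected by chance alone, so the chance-corrected index is only 0.4.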
Publication Info
- Year: 1975
- Type: article
- Volume: 31
- Issue: 3
- Pages: 651
- Citations: 504
- Access: Closed
Identifiers
- DOI: 10.2307/2529549