Abstract
J. A. Cohen's kappa (1960) for measuring agreement between 2 raters, using a nominal scale, has been extended for use with multiple raters by R. J. Light (1971) and J. L. Fleiss (1971). In the present article, these indices are analyzed and reformulated in terms of agreement statistics based on all...
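For orientation only: the two-rater statistic that the abstract builds on is the familiar Cohen (1960) kappa, sketched below in conventional textbook notation (the symbols $p_o$, $p_e$, and the cell proportions $p_{kk}$ are standard usage, not taken from the indexed article).

% Two-rater kappa on a K-category nominal scale (conventional notation):
%   p_o = observed proportion of agreement
%   p_e = agreement expected by chance from the raters' marginal proportions
\[
  \kappa = \frac{p_o - p_e}{1 - p_e},
  \qquad
  p_o = \sum_{k=1}^{K} p_{kk},
  \qquad
  p_e = \sum_{k=1}^{K} p_{k\cdot}\, p_{\cdot k}.
\]

The Light (1971) and Fleiss (1971) indices mentioned in the abstract generalize these two-rater quantities to more than two raters.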
Related Publications
Interrater reliability: the kappa statistic.
The kappa statistic is frequently used to test interrater reliability. The importance of rater reliability lies in the fact that it represents the extent to which the data colle...
Observer Reliability and Agreement
The terms observer reliability and observer agreement represent different concepts. Reliability coefficients express the ability to differentiate between subjects. Agre...
Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit.
A previously described coefficient of agreement for nominal scales, kappa, treats all disagreements equally. A generalization to weighted kappa (Kw) is presented. The Kw provide...
Moments of the statistics kappa and weighted kappa
This paper considers the mean and variance of the two statistics, kappa and weighted kappa, which are useful in measuring agreement between two raters, in the situation where th...
Reliability of Psychiatric Diagnosis
In a study of interrater diagnostic reliability, 101 psychiatric inpatients were independently interviewed by physicians using a structured interview. Newly admitted patients we...
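The weighted kappa listed above is usually attributed to Cohen (1968); a minimal notation sketch, using conventional symbols rather than anything quoted from that entry, is:

% Weighted kappa with disagreement weights w_ij (w_ii = 0); p_ij is the observed
% proportion in cell (i, j), and p_i., p_.j are the corresponding marginal proportions:
\[
  \kappa_w = 1 - \frac{\sum_i \sum_j w_{ij}\, p_{ij}}
                      {\sum_i \sum_j w_{ij}\, p_{i\cdot}\, p_{\cdot j}}.
\]
% Setting w_ij = 1 for all i != j recovers the unweighted kappa shown earlier.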
Publication Info
- Year: 1980
- Type: article
- Volume: 88
- Issue: 2
- Pages: 322-328
- Citations: 510
- Access: Closed
Identifiers
- DOI: 10.1037/0033-2909.88.2.322