Abstract

THE LAST few years have seen a number of studies dealing with the problem of the reliability of psychiatric diagnoses. What makes it difficult to assimilate the various findings is the lack of uniform methods for quantifying the salient features of the data. Thus, one study will report an overall rate of perfect agreement of 54%,¹ while another will report an overall contingency coefficient of 0.714.² Still another will report that, given that one diagnostician has made a particular diagnosis, the probability that another diagnostician will make the same diagnosis is 0.57.³ Furthermore, as generally used, all of these methods suffer from one or more deficiencies, which are illustrated using the hypothetical data of Table 1. (1) Chance agreement is not taken into account, or equivalently, the base rates at which the various diagnoses are made are not used to qualify the agreement measure.
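The chance-agreement deficiency noted above is what chance-corrected statistics such as Cohen's kappa address. A minimal sketch, using a hypothetical 2×2 contingency table of two diagnosticians' diagnoses (not the paper's actual Table 1), shows how the raw agreement rate, the agreement expected from the base rates alone, and a chance-corrected index relate:

```python
# Hypothetical contingency table of two diagnosticians' diagnoses
# (rows: rater A's diagnosis, columns: rater B's diagnosis).
# These counts are illustrative only, NOT the paper's Table 1.
table = [
    [40, 10],   # rater A: diagnosis X
    [15, 35],   # rater A: diagnosis Y
]

n = sum(sum(row) for row in table)  # total number of cases

# Observed proportion of perfect agreement (the "54%"-style figure).
p_o = sum(table[i][i] for i in range(len(table))) / n

# Agreement expected by chance from the marginal base rates alone.
row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
p_e = sum(r * c for r, c in zip(row_totals, col_totals)) / n**2

# Cohen's kappa: observed agreement corrected for chance agreement.
kappa = (p_o - p_e) / (1 - p_e)
print(p_o, p_e, kappa)  # → 0.75 0.5 0.5
```

Two tables with the same raw agreement rate can yield very different kappa values when their base rates differ, which is precisely why an uncorrected agreement percentage is hard to compare across studies.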

Keywords

Medical diagnosis, Contingency table, Agreement, Psychiatric diagnosis, Reliability, Inter-rater reliability, Psychiatry, Psychology, Statistics, Rating scale, Schizophrenia

Publication Info

Year
1967
Type
article
Volume
17
Issue
1
Pages
83-83
Citations
341
Access
Closed

Cite This

Robert L. Spitzer (1967). Quantification of Agreement in Psychiatric Diagnosis. Archives of General Psychiatry, 17(1), 83. https://doi.org/10.1001/archpsyc.1967.01730250085012

Identifiers

DOI
10.1001/archpsyc.1967.01730250085012