Abstract

We address the problem of determining what size test set guarantees statistically significant results in a character recognition task, as a function of the expected error rate. We provide a statistical analysis showing that if, for example, the expected character error rate is around 1 percent, then, with a test set of at least 10,000 statistically independent handwritten characters (which could be obtained by taking 100 characters from each of 100 different writers), we guarantee, with 95 percent confidence, that: (1) the expected value of the character error rate is not worse than 1.25 E, where E is the empirical character error rate of the best recognizer, calculated on the test set; and (2) a difference of 0.3 E between the error rates of two recognizers is significant. We developed this framework with character recognition applications in mind, but it applies as well to speech recognition and to other pattern recognition problems.
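The abstract's headline numbers can be sanity-checked with a back-of-the-envelope calculation, assuming i.i.d. Bernoulli errors and a one-sided 95% normal approximation to the binomial (the paper's own analysis is more careful and more conservative, hence its slightly larger factors of 1.25 E and 0.3 E). The function names below are illustrative, not from the paper:

```python
import math

def one_sided_upper_margin(p, n, z=1.645):
    """Relative one-sided 95% margin on the empirical error rate E,
    assuming n i.i.d. Bernoulli(p) errors and a normal approximation."""
    return z * math.sqrt(p * (1 - p) / n) / p

def significant_difference(p, n, z=1.645):
    """Smallest detectable gap (relative to p) between two recognizers
    tested on the same n samples, treating their per-sample error
    indicators as independent (a simplifying assumption)."""
    return z * math.sqrt(2 * p * (1 - p) / n) / p

p, n = 0.01, 10_000  # ~1% error rate, 10,000 independent test characters
print(f"true error rate <= ~{1 + one_sided_upper_margin(p, n):.2f} E")
print(f"detectable difference ~ {significant_difference(p, n):.2f} E")
```

With these assumptions the bounds come out near 1.16 E and 0.23 E, i.e. the same order as the abstract's guaranteed 1.25 E and 0.3 E.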

Keywords

Word error rate, Character recognition, Test set, Pattern recognition, Speech recognition, Statistics, Machine learning, Artificial intelligence, Computer science

Publication Info

Year
1998
Type
article
Volume
20
Issue
1
Pages
52-64
Citations
152
Access
Closed

Citation Metrics

152 citations (OpenAlex)

Cite This

Isabelle Guyon, J. Makhoul, Richard Schwartz, et al. (1998). What size test set gives good error rate estimates? IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(1), 52-64. https://doi.org/10.1109/34.655649

Identifiers

DOI
10.1109/34.655649