Abstract

We present two novel methods for face verification. Our first method - "attribute" classifiers - uses binary classifiers trained to recognize the presence or absence of describable aspects of visual appearance (e.g., gender, race, and age). Our second method - "simile" classifiers - removes the manual labeling required for attribute classification and instead learns the similarity of faces, or regions of faces, to specific reference people. Neither method requires costly, often brittle, alignment between image pairs; yet both methods produce compact visual descriptions and work on real-world images. Furthermore, both the attribute and simile classifiers improve on the current state of the art for the Labeled Faces in the Wild (LFW) data set, reducing the error rates compared to the current best by 23.92% and 26.34%, respectively, and by 31.68% when combined. For further testing across pose, illumination, and expression, we introduce a new data set - termed PubFig - of real-world images of public figures (celebrities and politicians) acquired from the internet. This data set is both larger (60,000 images) and deeper (300 images per individual) than existing data sets of its kind. Finally, we present an evaluation of human performance.
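The abstract describes the verification pipeline only at a high level. The sketch below illustrates the general idea rather than the authors' implementation: each face is summarized by the outputs of attribute classifiers, and a second classifier is trained on pairwise comparisons of those outputs to decide whether two faces show the same person. The attribute scores, pair-comparison features, and training pairs here are hypothetical placeholders.

```python
# Minimal sketch (not the paper's implementation) of attribute-based face
# verification: describe each face by a vector of attribute-classifier
# outputs, then train a verifier on pairwise comparisons of those vectors.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def attribute_vector(face_image):
    """Placeholder: would return outputs of binary attribute classifiers
    (e.g., gender, race, age) for the given face image."""
    return rng.normal(size=10)  # 10 hypothetical attribute scores

def pair_features(a1, a2):
    """Compare two attribute vectors (absolute difference and product)."""
    return np.concatenate([np.abs(a1 - a2), a1 * a2])

# Hypothetical training pairs: label 1 = same person, 0 = different people.
X, y = [], []
for _ in range(200):
    a1, a2 = attribute_vector(None), attribute_vector(None)
    same = rng.integers(0, 2)
    if same:  # simulate near-identical attribute scores for matching pairs
        a2 = a1 + rng.normal(scale=0.1, size=a1.shape)
    X.append(pair_features(a1, a2))
    y.append(same)

verifier = SVC(kernel="rbf").fit(np.array(X), np.array(y))
print(verifier.predict([pair_features(attribute_vector(None), attribute_vector(None))]))
```

In the same spirit, the "simile" variant would replace the attribute scores with similarity scores of face regions to a fixed set of reference people, leaving the pairwise verification step unchanged.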

Keywords

Computer science, Artificial intelligence, Face (sociological concept), Simile, Set (abstract data type), Pattern recognition (psychology), Similarity (geometry), Image (mathematics), Data set

Publication Info

Year: 2009
Type: Article
Citations: 1366 (OpenAlex)
Access: Closed

Cite This

Neeraj Kumar, Alexander C. Berg, Peter N. Belhumeur et al. (2009). Attribute and simile classifiers for face verification. 2009 IEEE 12th International Conference on Computer Vision (ICCV). https://doi.org/10.1109/iccv.2009.5459250

Identifiers

DOI: 10.1109/iccv.2009.5459250