Abstract

Several studies have shown the benefits of using spatio-temporal information for video face recognition. However, most existing spatio-temporal representations do not capture the local discriminative information present in human faces. In this paper we introduce a new local spatio-temporal descriptor for video face recognition, based on structured ordinal features. The proposed method not only jointly encodes local spatial and temporal information, but also extracts the most discriminative facial dynamics while discarding spatio-temporal features related to intra-personal variations. In addition, we propose a similarity measure based on a set of background samples to be used with our descriptor, which is shown to boost its performance. Extensive experiments on the recent but challenging YouTube Faces database demonstrate the good performance of our proposal, achieving state-of-the-art results.
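The two ideas in the abstract, ordinal comparisons over spatio-temporal volumes and a background-normalized similarity, can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's actual VSOF descriptor: the sub-volume size, code layout, Hamming-style similarity, and all function names below are assumptions made for demonstration only.

```python
import numpy as np

# Illustrative sketch only. The sub-volume size, pairing scheme, and
# background-normalized similarity are assumptions, not the paper's method.

BOX = (1, 2, 2)  # assumed sub-volume size: (frames, rows, cols)

def ordinal_code(volume, pairs):
    """Encode a spatio-temporal volume as a binary ordinal code.

    volume: (T, H, W) array of grayscale frames.
    pairs:  list of ((t, y, x), (t, y, x)) corner pairs; each corner
            defines a BOX-sized sub-volume, and each bit records whether
            the first sub-volume's mean intensity exceeds the second's.
    """
    bits = []
    for (t1, y1, x1), (t2, y2, x2) in pairs:
        m1 = volume[t1:t1 + BOX[0], y1:y1 + BOX[1], x1:x1 + BOX[2]].mean()
        m2 = volume[t2:t2 + BOX[0], y2:y2 + BOX[1], x2:x2 + BOX[2]].mean()
        bits.append(1 if m1 > m2 else 0)
    return np.array(bits, dtype=np.uint8)

def background_similarity(code_a, code_b, background_codes):
    """Similarity of two codes normalized against a background set:
    raw Hamming similarity minus the mean similarity each code has to
    the background samples (a simple stand-in for a background-based
    measure)."""
    def sim(u, v):
        return float(np.mean(u == v))
    raw = sim(code_a, code_b)
    bg = np.mean([sim(code_a, c) for c in background_codes] +
                 [sim(code_b, c) for c in background_codes])
    return raw - bg
```

With this normalization, a pair of codes that is no more similar than typical background samples scores near (or below) zero, which mirrors the abstract's point that comparing against a background set sharpens the decision.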

Keywords

Discriminative model, Computer science, Artificial intelligence, Pattern recognition (psychology), Similarity (geometry), Face (sociological concept), Facial recognition system, Similarity measure, Set (abstract data type), Measure (data warehouse), Data mining, Image (mathematics)

Publication Info

Year
2013
Type
article
Pages
1-6
Citations
53
Access
Closed

Citation Metrics

53 citations (OpenAlex)

Cite This

Heydi Méndez-Vázquez, Yoanna Martínez-Díaz, Zhenhua Chai (2013). Volume structured ordinal features with background similarity measure for video face recognition, 1-6. https://doi.org/10.1109/icb.2013.6612990

Identifiers

DOI
10.1109/icb.2013.6612990