Abstract

Low-dimensional representations of sensory signals are key to solving many of the computational problems encountered in high-level vision. Principal component analysis (PCA) has been used in the past to derive practically useful compact representations for different classes of objects. One major objection to the applicability of PCA is that it invariably leads to global, non-topographic representations that are not amenable to further processing and are not biologically plausible. In this paper we present a new mathematical construction, local feature analysis (LFA), for deriving local topographic representations for any class of objects. The LFA representations are sparse-distributed and, hence, are effectively low-dimensional and retain all the advantages of the compact representations of PCA. But, unlike the global eigenmodes, they give a description of objects in terms of statistically derived local features and their positions. We illustrate the theory by using it to extract local features for three ensembles: 2D images of faces without background, 3D surfaces of human heads, and finally 2D faces on a background. The resulting local representations have powerful applications in head segmentation and face recognition.
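
As a rough illustration of the pipeline the abstract describes, the following minimal sketch (Python with NumPy, on random placeholder data rather than a real face ensemble) first computes global PCA eigenmodes of an image ensemble and then builds position-indexed kernels by whitening the retained modes, which is the general idea behind local, topographic outputs. The ensemble size, variable names, and exact normalization are illustrative assumptions, not the paper's definitive formulation.

import numpy as np

# Placeholder ensemble: real use would load aligned face images as rows.
rng = np.random.default_rng(0)
n_images, n_pixels = 200, 32 * 32          # hypothetical ensemble and image sizes
X = rng.standard_normal((n_images, n_pixels))

# PCA: eigenmodes of the ensemble covariance (global, non-topographic).
X_centered = X - X.mean(axis=0)
cov = X_centered.T @ X_centered / n_images
eigvals, eigvecs = np.linalg.eigh(cov)      # returned in ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_modes = 50                                # keep only the leading modes
lam = eigvals[:n_modes]
psi = eigvecs[:, :n_modes]                  # columns: global eigenmodes

# Topographic kernels: one output per pixel position, built by whitening the
# retained eigenmodes, K[x, y] = sum_r psi_r[x] * psi_r[y] / sqrt(lam_r).
K = psi @ np.diag(1.0 / np.sqrt(lam)) @ psi.T

# Projecting an image onto the kernels yields outputs indexed by pixel
# position, i.e. a local description rather than a global coefficient vector.
outputs = K @ X_centered[0]
print(outputs.shape)                        # (n_pixels,)

Because only n_modes eigenmodes are retained, the position-indexed outputs are highly correlated and effectively low-dimensional, which is the sense in which a sparse, local code can keep the compactness of PCA.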

Keywords

Representation, Feature, Object, Computer science, Statistical theory, Artificial intelligence, Pattern recognition, Mathematics, Statistics, Linguistics, Philosophy

Publication Info

Year: 1996
Type: Article
Volume: 7
Issue: 3
Pages: 477-500
Citations: 589
Access: Closed

Citation Metrics

589 citations (source: OpenAlex)

Cite This

P. S. Penev and J. J. Atick (1996). Local feature analysis: a general statistical theory for object representation. Network: Computation in Neural Systems, 7(3), 477-500. https://doi.org/10.1088/0954-898x_7_3_002

Identifiers

DOI
10.1088/0954-898x_7_3_002