Abstract

Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally matched) dictionary. The elements of the dictionary can be interpreted as concepts, features, or words capable of succinct expression of events encountered in the environment (the source of the measured signals). This is a generalization of vector quantization in that one is interested in a description involving a few dictionary entries (the proverbial “25 words or less”), but not necessarily as succinct as one entry. To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, we develop algorithms that iterate between a representative set of sparse representations found by variants of FOCUSS and an update of the dictionary using these sparse representations. Experiments were performed using synthetic data and natural images. For complete dictionaries, we demonstrate that our algorithms have improved performance over other independent component analysis (ICA) methods, measured in terms of signal-to-noise ratios of separated sources. In the overcomplete case, we show that the true underlying dictionary and sparse sources can be accurately recovered. In tests with natural images, learned overcomplete dictionaries are shown to have higher coding efficiency than complete dictionaries; that is, images encoded with an overcomplete dictionary have both higher compression (fewer bits per pixel) and higher accuracy (lower mean square error).
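The alternating scheme the abstract describes — sparse coding of each signal with a FOCUSS-style solver, then a dictionary update from those sparse codes — can be sketched roughly as follows. This is a minimal illustration under assumptions, not the paper's exact algorithm: the p = 0.5 diversity measure, the ridge-regularized least-squares dictionary update, the dead-atom re-randomization, and all function names are illustrative choices.

```python
import numpy as np

def focuss(A, y, p=0.5, n_iter=20, eps=1e-8):
    """Sparse solution of y ~ A x via FOCUSS-style reweighted least squares.

    Each iteration solves x = W (A W)^+ y with affine-scaling weights
    W = diag(|x|^(1 - p/2)), which concentrates energy on few entries.
    """
    x = np.linalg.pinv(A) @ y                # minimum-norm initialization
    for _ in range(n_iter):
        w = np.abs(x) ** (1.0 - p / 2.0)     # affine-scaling weights
        Aw = A * w                           # column scaling: A @ diag(w)
        x = w * (np.linalg.pinv(Aw) @ y)     # x = W (A W)^+ y
        x[np.abs(x) < eps] = 0.0             # prune numerically dead entries
    return x

def learn_dictionary(Y, n_atoms, n_outer=10, seed=0):
    """Alternate FOCUSS sparse coding with a least-squares dictionary update.

    Y is (m, N): N training signals as columns. Returns (A, X) with Y ~ A X.
    """
    rng = np.random.default_rng(seed)
    m, N = Y.shape
    A = rng.standard_normal((m, n_atoms))
    A /= np.linalg.norm(A, axis=0)           # unit-norm atoms
    for _ in range(n_outer):
        # Sparse-coding step: code every training signal with the current A.
        X = np.column_stack([focuss(A, Y[:, i]) for i in range(N)])
        # Dictionary step: ridge-regularized least squares A = Y X^T (X X^T)^-1.
        A = Y @ X.T @ np.linalg.pinv(X @ X.T + 1e-8 * np.eye(n_atoms))
        # Re-seed atoms that collapsed (never used), a common heuristic.
        norms = np.linalg.norm(A, axis=0)
        dead = norms < 1e-10
        A[:, dead] = rng.standard_normal((m, int(dead.sum())))
        norms[dead] = np.linalg.norm(A[:, dead], axis=0)
        A /= norms                           # renormalize atoms to unit norm
    return A, X
```

The column normalization after each dictionary step reflects the usual scale ambiguity of the factorization Y ≈ AX (a column of A can absorb any scale from the matching row of X).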

Keywords

K-SVD; Prior probability; Computer science; Pattern recognition (psychology); Neural coding; Bayesian probability; Algorithm; Generalization; Sparse approximation; Artificial intelligence; Mathematics

Related Publications

Decoding by Linear Programming

This paper considers a natural error-correcting problem with real-valued input/output. We wish to recover an input vector f ∈ R^n from corrupted measurements y = Af + e....

IEEE Transactions on Information Theory, 2005. 7166 citations.

Publication Info

Year: 2003
Type: article
Volume: 15
Issue: 2
Pages: 349–396
Citations: 825
Access: Closed


Citation Metrics

825 citations (OpenAlex)

Cite This

Kenneth Kreutz-Delgado, Joseph F. Murray, Bhaskar D. Rao et al. (2003). Dictionary Learning Algorithms for Sparse Representation. Neural Computation, 15(2), 349–396. https://doi.org/10.1162/089976603762552951

Identifiers

DOI: 10.1162/089976603762552951