Abstract
Vector quantization is intrinsically superior to predictive coding, transform coding, and other suboptimal and ad hoc procedures, since it achieves optimal rate-distortion performance subject only to a constraint on the memory or block length of the observable signal segment being encoded. The key limitations of existing techniques are the very large, randomly generated code books that must be stored and the computational complexity of the associated encoding procedures. The quantization operation is decomposed into its rudimentary structural components. This leads to a simple and elegant approach for deriving analytical properties of optimal quantizers. Some useful properties of quantizers and algorithmic approaches are given that are relevant to the complexity of both storage and processing in the encoding operation. Highly disordered quantizers, designed using a clustering algorithm, are considered. Finally, lattice quantizers are examined, which circumvent the need for a code book by using a highly structured code based on lattices. The code vectors are generated algorithmically in a simple manner rather than stored in a code book, and fast algorithms perform the encoding with negligible complexity.
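The sketch below contrasts the two encoding structures described in the abstract: an unstructured code book searched exhaustively, with a crude clustering-style design loop standing in for the clustering algorithm mentioned above, versus a lattice code whose reproduction vectors are computed on the fly instead of being stored. It is a minimal illustration assuming NumPy; the names encode_full_search, design_codebook, and lattice_quantize are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: full-search vector quantization vs. a lattice quantizer.
# Function names are hypothetical; the paper's own procedures may differ in detail.
import numpy as np


def encode_full_search(x, codebook):
    """Map a k-dimensional vector x to the index of the nearest code vector.

    Both storage and search cost grow with the codebook size, which is the
    complexity limitation discussed in the abstract.
    """
    distances = np.sum((codebook - x) ** 2, axis=1)  # squared Euclidean distortion
    return int(np.argmin(distances))


def design_codebook(training, n_codevectors, n_iterations=20, seed=0):
    """Crude clustering-based (k-means style) codebook design from training data.

    This stands in for the clustering approach mentioned in the abstract.
    """
    rng = np.random.default_rng(seed)
    codebook = training[rng.choice(len(training), n_codevectors, replace=False)]
    for _ in range(n_iterations):
        # Nearest-neighbor partition of the training set.
        labels = np.array([encode_full_search(x, codebook) for x in training])
        # Centroid update; keep the old code vector if a cell is empty.
        for j in range(n_codevectors):
            members = training[labels == j]
            if len(members) > 0:
                codebook[j] = members.mean(axis=0)
    return codebook


def lattice_quantize(x, step=1.0):
    """Quantize x onto the scaled integer lattice step * Z^k.

    No codebook is stored: the nearest lattice point is obtained directly by
    rounding, illustrating how a structured code avoids both the storage and
    the search cost of a random codebook.
    """
    return step * np.round(x / step)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    training = rng.normal(size=(2000, 2))  # synthetic 2-D source
    codebook = design_codebook(training, n_codevectors=16)
    x = rng.normal(size=2)
    print("full-search index:", encode_full_search(x, codebook))
    print("lattice reproduction:", lattice_quantize(x, step=0.5))
```

With the integer lattice, the nearest code vector is found by rounding rather than by table search, which is what gives lattice quantizers their negligible encoding complexity.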
Related Publications
Product code vector quantizers for waveform and voice coding
Memory and computation requirements imply fundamental limitations on the quality that can be achieved in vector quantization systems used for speech waveform coding and linear p...
An Algorithm for Vector Quantizer Design
An efficient and intuitive algorithm is presented for the design of vector quantizers based either on a known probabilistic model or on a long training sequence of data. The bas...
Vector quantization in speech coding
Quantization, the process of approximating continuous-amplitude signals by digital (discrete-amplitude) signals, is an important aspect of data compression or coding, the field ...
Asymptotically optimal block quantization
In 1948 W. R. Bennett used a companding model for nonuniform quantization and proposed the formula ...
Self-organisation: a derivation from first principles of a class of learning algorithms
A novel derivation of T. Kohonen's topographic mapping learning algorithm (Self-Organization and Associative Memory, Springer-Verlag, 1984) is presented. Thus the author prescri...
Publication Info
- Year: 1982
- Type: article
- Volume: 28
- Issue: 2
- Pages: 157-166
- Citations: 317
- Access: Closed
Identifiers
- DOI: 10.1109/tit.1982.1056457