Abstract
The computation and memory required for kernel machines with N training samples is at least O(N²). Such complexity is significant even for moderate-sized problems and is prohibitive for large datasets. We present an approximation technique based on the improved fast Gauss transform that reduces the computation to O(N). We also give an error bound for the approximation and provide experimental results on the UCI datasets.
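As a rough illustration of the idea behind the abstract (not the paper's actual algorithm), the sketch below contrasts direct O(NM) evaluation of a Gaussian kernel sum with a single-cluster, one-dimensional IFGT-style truncated Taylor expansion that costs O(p(N+M)). The function names gauss_sum_direct and ifgt_1d, the bandwidth h, and the truncation order p are illustrative assumptions; the full method additionally clusters the sources and uses multivariate expansions with a rigorous error bound.

```python
import numpy as np
from math import factorial

def gauss_sum_direct(x, y, q, h):
    """Direct evaluation of G(y_j) = sum_i q_i exp(-(y_j - x_i)^2 / h^2): O(N*M)."""
    return (q * np.exp(-((y[:, None] - x[None, :]) ** 2) / h**2)).sum(axis=1)

def ifgt_1d(x, y, q, h, p=12):
    """One-cluster, 1-D sketch of an IFGT-style expansion (illustrative only).

    Uses the factorization
        exp(-(y-x)^2/h^2) = exp(-(y-c)^2/h^2) * exp(-(x-c)^2/h^2)
                            * exp(2 (y-c)(x-c) / h^2)
    and truncates the Taylor series of the last factor after p terms,
    so the work is O(p*(N+M)) rather than O(N*M).
    """
    c = x.mean()                               # single expansion center
    tx, ty = (x - c) / h, (y - c) / h
    n = np.arange(p)
    fact = np.array([factorial(k) for k in range(p)], dtype=float)
    # Source-side moments: C_n = (2^n / n!) * sum_i q_i exp(-tx_i^2) tx_i^n
    C = (2.0**n / fact) * ((q * np.exp(-tx**2))[:, None] * tx[:, None] ** n).sum(axis=0)
    # Target-side evaluation: G(y_j) ~= exp(-ty_j^2) * sum_n C_n ty_j^n
    return np.exp(-ty**2) * (C * ty[:, None] ** n).sum(axis=1)

# Quick check on data clustered within about one bandwidth of the center,
# where the truncated expansion is accurate.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.3, size=2000)
y = rng.normal(0.0, 0.3, size=500)
q = rng.uniform(size=2000)
print(np.max(np.abs(ifgt_1d(x, y, q, h=1.0) - gauss_sum_direct(x, y, q, h=1.0))))
```

The source-side pass compresses all N points into p moments, and each target is then evaluated against those moments alone, which is where the linear scaling comes from.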
Publication Info
- Year: 2004
- Type: article
- Volume: 17
- Pages: 1561-1568
- Citations: 128
- Access: Closed