Abstract

The computation and memory required for kernel machines with N training samples is at least O(N²). Such complexity is significant even for moderate-size problems and is prohibitive for large datasets. We present an approximation technique based on the improved fast Gauss transform to reduce the computation to O(N). We also give an error bound for the approximation, and provide experimental results on the UCI datasets.
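
For concreteness, the bottleneck sum in this setting is the discrete Gauss transform, G(y_j) = Σ_i q_i exp(-||y_j − x_i||² / h²), whose direct evaluation over N points costs O(N²); the improved fast Gauss transform approximates it in O(N). The sketch below shows only the naive O(N²) baseline in NumPy so the cost being removed is explicit; the function name, data, and bandwidth are illustrative assumptions, not the authors' implementation:

    import numpy as np

    def gauss_transform_naive(sources, targets, weights, h):
        # Direct evaluation of G(y_j) = sum_i q_i * exp(-||y_j - x_i||^2 / h^2).
        # Cost is O(N*M) for N sources and M targets, i.e. O(N^2) when the
        # training set plays both roles; this is the cost the improved fast
        # Gauss transform reduces to O(N).
        d2 = ((targets[:, None, :] - sources[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / h ** 2) @ weights

    # Illustrative usage; all sizes and values are assumptions.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 3))   # N = 500 samples in 3 dimensions
    q = rng.standard_normal(500)        # per-sample weights
    G = gauss_transform_naive(X, X, q, h=1.0)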

Keywords

Computation, Gauss, Kernel (algebra), Computer science, Algorithm, Approximation error, Artificial intelligence, Mathematics, Discrete mathematics

Related Publications

Compressed sensing

Suppose x is an unknown vector in ℝ^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible ...

2004 · 17126 citations

Compressed sensing

Suppose x is an unknown vector in ℝ^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct ...

2006 · IEEE Transactions on Information Theory · 22524 citations

Publication Info

Year: 2004
Type: Article
Volume: 17
Pages: 1561-1568
Citations: 128
Access: Closed

Citation Metrics

128 citations (OpenAlex)

Cite This

Changjiang Yang, Ramani Duraiswami, Larry S. Davis (2004). Efficient Kernel Machines Using the Improved Fast Gauss Transform. Advances in Neural Information Processing Systems, 17, 1561-1568.