Abstract

We construct data-dependent upper bounds on the risk in function learning problems. The bounds are based on local norms of the Rademacher process indexed by the underlying function class, and they require no prior knowledge about the distribution of the training examples or any specific properties of the function class. Using Talagrand-type concentration inequalities for empirical and Rademacher processes, we show that the bounds hold with high probability, with the probability of failure decreasing exponentially fast as the sample size grows. In typical situations encountered in the theory of function learning, the bounds give nearly optimal rates of convergence of the risk to zero.
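The localized bounds of the chapter refine a simpler, global data-dependent bound of the same flavor: for a class F of [0,1]-valued loss functions and a sample of size n, with probability at least 1 - delta, every f in F satisfies R(f) <= R_n(f) + 2*Rhat_n(F) + 3*sqrt(log(2/delta)/(2n)), where Rhat_n(F) is the empirical Rademacher complexity. The sketch below is a minimal Python illustration of this standard global version for a finite class, with the Rademacher average estimated by Monte Carlo over random signs; the function names and constants follow the well-known global bound, not the authors' iterative localization scheme.

```python
import numpy as np

def empirical_rademacher(losses, n_draws=2000, rng=None):
    """Monte Carlo estimate of the empirical Rademacher complexity
    Rhat_n(F) = E_sigma sup_{f in F} (1/n) sum_i sigma_i f(X_i),
    for a finite class whose losses on the sample are the rows of `losses`."""
    rng = np.random.default_rng(rng)
    m, n = losses.shape
    total = 0.0
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=n)  # i.i.d. Rademacher signs
        total += losses.dot(sigma).max() / n     # sup over the m functions
    return total / n_draws

def global_risk_bound(losses, delta=0.05):
    """Data-dependent bound (global, non-localized version): with
    probability >= 1 - delta, simultaneously for every function f_j
    (losses assumed to take values in [0, 1]),
        R(f_j) <= R_n(f_j) + 2 * Rhat_n(F) + 3 * sqrt(log(2/delta) / (2n))."""
    n = losses.shape[1]
    emp_risk = losses.mean(axis=1)               # R_n(f_j), one per function
    rad = empirical_rademacher(losses)
    slack = 3.0 * np.sqrt(np.log(2.0 / delta) / (2.0 * n))
    return emp_risk + 2.0 * rad + slack

# Example: 0-1 losses of 5 hypothetical classifiers on 200 sample points.
rng = np.random.default_rng(0)
losses = (rng.random((5, 200)) < 0.3).astype(float)
print(global_risk_bound(losses))  # one risk upper bound per classifier
```

The localization studied in the chapter sharpens this by replacing the global Rademacher term with a local norm of the Rademacher process, restricted to functions whose empirical risk is already small, which is what yields the nearly optimal rates mentioned in the abstract.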

Keywords

Bounding, Function class, Convergence, Mathematics, Applied mathematics, Computer science, Artificial intelligence

Publication Info

Year: 2000
Type: Book chapter
Pages: 443-457
Citations: 206 (OpenAlex)
Access: Closed

Cite This

Vladimir Koltchinskii, Dmitriy Panchenko (2000). Rademacher Processes and Bounding the Risk of Function Learning. Birkhäuser Boston eBooks, 443-457. https://doi.org/10.1007/978-1-4612-1358-1_29

Identifiers

DOI: 10.1007/978-1-4612-1358-1_29