Abstract

We present a new family of subgradient methods that dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based learning. The adaptation, in essence, allows us to find needles in haystacks in the form of very predictive yet rarely observed features. Our paradigm stems from recent advances in online learning which employ proximal functions to control the gradient steps of the algorithm. We describe and analyze an apparatus for adaptively modifying the proximal function, which significantly simplifies the task of setting a learning rate and results in regret guarantees that are provably as good as the best proximal function that can be chosen in hindsight. We corroborate our theoretical results with experiments on a text classification task, showing substantial improvements for classification with sparse datasets.
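
To make the adaptive step concrete, below is a minimal sketch of the per-coordinate (diagonal) form commonly associated with this method: each coordinate's step size is scaled by the inverse square root of its accumulated squared gradients, so rarely observed (but predictive) features keep larger steps. The function name `adagrad_update`, the step size `eta`, the smoothing term `eps`, and the toy quadratic objective are illustrative choices, not taken from the paper.

```python
import numpy as np

def adagrad_update(w, g, G_accum, eta=0.1, eps=1e-8):
    """One diagonal adaptive-subgradient step.

    G_accum holds the running per-coordinate sum of squared (sub)gradients;
    dividing by its square root gives frequently updated coordinates small
    steps and rarely updated coordinates comparatively large steps.
    """
    G_accum += g ** 2
    w -= eta * g / (np.sqrt(G_accum) + eps)
    return w, G_accum

# Toy usage: minimize f(w) = 0.5 * ||w - target||^2.
target = np.array([1.0, -2.0, 0.5])
w = np.zeros(3)
G = np.zeros(3)
for _ in range(500):
    g = w - target          # gradient of the quadratic objective
    w, G = adagrad_update(w, g, G, eta=0.5)
print(w)                    # w should be close to target
```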

Keywords

Subgradient method, Regret, Computer science, Regularization, Mathematical optimization, Empirical risk minimization, Artificial intelligence, Stochastic optimization, Machine learning, Algorithm, Mathematics

Publication Info

Year: 2010
Type: Article
Volume: 12
Issue: 61
Pages: 257-269
Citations: 8609
Access: Closed

Citation Metrics

8609 (OpenAlex)

Cite This

John C. Duchi, Elad Hazan, Yoram Singer (2010). Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. 12(61), 257-269.