Abstract

We study the rates of growth of the regret in online convex optimization. First, we show that a simple extension of the algorithm of Hazan et al. eliminates the need for a priori knowledge of the lower bound on the second derivatives of the observed functions. We then provide an algorithm, Adaptive Online Gradient Descent, which interpolates between the results of Zinkevich for linear functions and of Hazan et al. for strongly convex functions, achieving intermediate rates between √T and log T. Furthermore, we show strong optimality of the algorithm. Finally, we provide an extension of our results to general norms.
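
For a concrete picture of the kind of update the abstract describes, the following is a minimal illustrative sketch in Python, not the authors' exact algorithm: projected online gradient descent with step sizes of the form η_t = 1/Σ_{s≤t}(H_s + λ_s), where H_s is the observed strong-convexity parameter of the round-s loss and λ_s is extra regularization added when curvature is scarce. The balancing rule for λ_t used here (forcing the cumulative curvature to grow at least like √t) is a simplification chosen for illustration, and the helper names `project_ball` and `adaptive_ogd` are our own.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto a ball of the given radius (assumed feasible set)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def adaptive_ogd(rounds, dim, radius=1.0):
    """
    Illustrative adaptive online gradient descent.

    rounds: iterable of (grad_fn, H_t) pairs, where grad_fn(x) returns a
    (sub)gradient of the round-t loss at x, and H_t >= 0 is that loss's
    strong-convexity parameter (0 for merely convex losses).
    """
    x = np.zeros(dim)
    cumulative_curvature = 0.0  # running sum of H_s plus added regularization lambda_s
    iterates = [x.copy()]
    for t, (grad_fn, H_t) in enumerate(rounds, start=1):
        # Add curvature lambda_t only when the observed curvature is too small, so the
        # step size decays like 1/sqrt(t) in the worst case (Zinkevich-style rates)
        # and like 1/t when the losses are genuinely strongly convex (logarithmic regret).
        lam_t = max(0.0, np.sqrt(t) - np.sqrt(max(t - 1, 0)) - H_t)
        cumulative_curvature += H_t + lam_t
        eta_t = 1.0 / cumulative_curvature
        x = project_ball(x - eta_t * grad_fn(x), radius)
        iterates.append(x.copy())
    return iterates
```

As a usage sketch, feeding in quadratic losses with H_t > 0 makes the step size shrink like 1/t, while purely linear losses (H_t = 0) fall back to the 1/√t schedule; the interpolation between the two regimes is the point the abstract highlights.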

Keywords

Gradient descent, Regret, Extension (predicate logic), A priori and a posteriori, Convex function, Mathematics, Mathematical optimization, Regular polygon, Simple (philosophy), Online algorithm, Convex optimization, Combinatorics, Computer science, Algorithm, Artificial intelligence, Statistics, Artificial neural network

Publication Info

Year: 2007
Type: article
Volume: 20
Pages: 65-72
Citations: 188
Access: Closed

Citation Metrics

188 citations (source: OpenAlex)

Cite This

Elad Hazan, Alexander Rakhlin, Peter L. Bartlett (2007). Adaptive Online Gradient Descent. Neural Information Processing Systems, 20, 65-72.