Abstract

We present a new method for regularized convex optimization and analyze it under both online and stochastic optimization settings. In addition to unifying previously known first-order algorithms, such as the projected gradient method, mirror descent, and forward-backward splitting, our method yields new analysis and algorithms. We also derive specific instantiations of our method for commonly used regularization functions, such as ℓ1, mixed norm, and trace norm.
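The abstract notes that the method recovers forward-backward splitting as a special case. As a hedged illustration of that connection, the sketch below shows a single composite mirror-descent-style update with the Euclidean Bregman divergence (ψ = ½‖·‖²) and an ℓ1 regularizer, which reduces to a gradient step followed by soft-thresholding. The function name, variable names, and specific constants are illustrative assumptions, not taken verbatim from the paper.

```python
import numpy as np

def comid_l1_step(x, grad, eta, lam):
    """One composite-objective update with the Euclidean Bregman
    divergence and an l1 regularizer: a forward gradient step
    followed by soft-thresholding (forward-backward splitting).
    Illustrative sketch; names are not from the paper."""
    y = x - eta * grad  # forward (gradient) step on the smooth part
    # soft-thresholding: proximal step for the l1 regularizer
    return np.sign(y) * np.maximum(np.abs(y) - eta * lam, 0.0)

# Example: a small coordinate is driven exactly to zero,
# illustrating the sparsity induced by the l1 term.
x = np.array([0.5, -1.2, 0.05])
g = np.array([0.1, -0.2, 0.0])
x_new = comid_l1_step(x, g, eta=0.1, lam=0.3)
```

Other choices of the distance-generating function ψ (e.g. the negative entropy) yield different instantiations of the same template, which is how the method unifies the algorithms listed above.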

Keywords

Stochastic gradient descent, Gradient descent, Regularization (mathematics), Computer science, Convex optimization, Convex function, Norm (mathematics), Mathematical optimization, Proximal gradient methods for learning, Optimization problem, Proximal gradient methods, Algorithm, Mathematics, Artificial intelligence, Convex analysis, Artificial neural network

Publication Info

Year
2010
Type
article
Pages
14-26
Citations
252
Access
Closed

Citation Metrics

252 (OpenAlex)

Cite This

John C. Duchi, Shai Shalev‐Shwartz, Yoram Singer, et al. (2010). Composite objective mirror descent. 14-26.