Abstract

We consider “one-at-a-time” coordinate-wise descent algorithms for a class of convex optimization problems. An algorithm of this kind has been proposed for the L1-penalized regression (lasso) in the literature, but it seems to have been largely ignored. Indeed, it seems that coordinate-wise algorithms are not often used in convex optimization. We show that this algorithm is very competitive with the well-known LARS (or homotopy) procedure in large lasso problems, and that it can be applied to related methods such as the garotte and elastic net. It turns out that coordinate-wise descent does not work in the “fused lasso,” however, so we derive a generalized algorithm that yields the solution in much less time than a standard convex optimizer. Finally, we generalize the procedure to the two-dimensional fused lasso, and demonstrate its performance on some image smoothing problems.
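The coordinate-wise descent idea the abstract refers to can be sketched as follows. This is a minimal illustration, not the paper's implementation: it cyclically updates one lasso coefficient at a time via the soft-thresholding operator, for the objective (1/2n)·||y − Xβ||² + λ·||β||₁. The function names and the 1/(2n) loss scaling are our own choices for the example.

```python
import numpy as np

def soft_threshold(z, gamma):
    # S(z, gamma) = sign(z) * max(|z| - gamma, 0)
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_coordinate_descent(X, y, lam, n_iter=100):
    """Cyclic coordinate descent for the lasso:
        minimize (1/2n) * ||y - X @ beta||^2 + lam * ||beta||_1.
    Each inner step minimizes over a single coordinate, holding
    the others fixed; that univariate problem has the closed-form
    soft-thresholding solution used below.
    """
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n  # per-coordinate curvature (1/n)||X_j||^2
    resid = y - X @ beta               # running residual, updated in place
    for _ in range(n_iter):
        for j in range(p):
            # correlation of X_j with the partial residual that excludes beta_j
            rho = X[:, j] @ resid / n + col_sq[j] * beta[j]
            new_bj = soft_threshold(rho, lam) / col_sq[j]
            resid += X[:, j] * (beta[j] - new_bj)  # keep residual consistent
            beta[j] = new_bj
    return beta
```

Keeping a running residual makes each coordinate update O(n), which is what makes the one-at-a-time scheme competitive on large problems.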

Keywords

Coordinate descent, Lasso, Elastic net regularization, Smoothing, Mathematical optimization, Convex optimization, Optimization problem, Algorithm, Feature selection, Statistics

Publication Info

Year
2007
Type
article
Volume
1
Issue
2
Citations
1908
Access
Closed

Citation Metrics

Citations (OpenAlex): 1908

Cite This

Jerome H. Friedman, Trevor Hastie, Holger Höfling et al. (2007). Pathwise coordinate optimization. The Annals of Applied Statistics, 1(2). https://doi.org/10.1214/07-aoas131

Identifiers

DOI
10.1214/07-aoas131