Abstract

Function estimation/approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions and steepest-descent minimization. A general gradient-descent “boosting” paradigm is developed for additive expansions based on any fitting criterion. Specific algorithms are presented for least-squares, least absolute deviation, and Huber-M loss functions for regression, and multiclass logistic likelihood for classification. Special enhancements are derived for the particular case where the individual additive components are regression trees, and tools for interpreting such “TreeBoost” models are presented. Gradient boosting of regression trees produces competitive, highly robust, interpretable procedures for both regression and classification, especially appropriate for mining less than clean data. Connections between this approach and the boosting methods of Freund and Schapire and of Friedman, Hastie and Tibshirani are discussed.
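
The abstract outlines the core recipe: start from a constant fit, repeatedly fit a small regression tree to the negative gradient of the loss at the current fit, and add a shrunken copy of that tree to the additive expansion. The sketch below illustrates this paradigm for the least-squares case only; it is not the paper's TreeBoost algorithm, and the function names, hyperparameters, and use of scikit-learn's DecisionTreeRegressor as the base learner are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_gradient_boost_ls(X, y, n_stages=100, learning_rate=0.1, max_depth=3):
    """Stagewise additive expansion under squared-error loss.

    Each stage fits a small regression tree to the negative gradient of the
    loss at the current fit (for least squares this is simply the residual
    vector), then adds a shrunken copy of that tree to the expansion:
    steepest descent carried out in function space.
    """
    # Initial model: the constant minimizing squared-error loss (the mean of y).
    f0 = float(np.mean(y))
    pred = np.full(len(y), f0)
    trees = []
    for _ in range(n_stages):
        residuals = y - pred                      # negative gradient of (1/2)(y - F)^2
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residuals)
        pred += learning_rate * tree.predict(X)   # shrunken stagewise update
        trees.append(tree)
    return f0, trees

def predict_gradient_boost(f0, trees, X, learning_rate=0.1):
    # Evaluate the additive expansion: constant plus shrunken tree predictions.
    pred = np.full(X.shape[0], f0)
    for tree in trees:
        pred += learning_rate * tree.predict(X)
    return pred
```

Replacing the residual computation with the sign of the residuals, or with Huber pseudo-residuals, would give the least-absolute-deviation and Huber-M variants the abstract mentions; the multiclass logistic case instead fits one tree per class to the gradient of the log-likelihood.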

Keywords

Mathematics, Gradient boosting, Boosting (machine learning), Gradient descent, Regression, Mathematical optimization, Logistic regression, Applied mathematics, Statistics, Artificial intelligence, Artificial neural network, Computer science, Random forest

Publication Info

Year: 2001
Type: Article
Volume: 29
Issue: 5
Citations: 26,394 (OpenAlex)
Access: Closed

Cite This

Jerome H. Friedman (2001). Greedy function approximation: A gradient boosting machine. The Annals of Statistics, 29(5), 1189-1232. https://doi.org/10.1214/aos/1013203451

Identifiers

DOI: 10.1214/aos/1013203451