Abstract

In an earlier paper, we introduced a new "boosting" algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. We also introduced the related notion of a "pseudo-loss," which is a method for forcing a learning algorithm of multi-label concepts to concentrate on the labels that are hardest to discriminate. In this paper, we describe experiments we carried out to assess how well AdaBoost, with and without pseudo-loss, performs on real learning problems. We performed two sets of experiments. The first set compared boosting to Breiman's "bagging" method when used to aggregate various classifiers (including decision trees and single attribute-value tests). We compared the performance of the two methods on a collection of machine-learning benchmarks. In the second set of experiments, we studied in more detail the performance of boosting...
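To make the boosting idea above concrete, here is a minimal sketch of an AdaBoost-style training loop for binary labels in {-1, +1}, using decision stumps as the weak learner. This is an illustrative simplification, not the paper's exact AdaBoost.M1/M2 pseudocode; the function names, the `n_rounds` parameter, and the use of scikit-learn's DecisionTreeClassifier are assumptions made for the sketch.

```python
# Minimal AdaBoost-style sketch for binary labels in {-1, +1}.
# Illustrative only; not the paper's exact AdaBoost.M1/M2 pseudocode.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, n_rounds=50):
    """Fit weak learners on reweighted data; return (stumps, alphas)."""
    X, y = np.asarray(X), np.asarray(y)
    n = len(y)
    w = np.full(n, 1.0 / n)                      # start with uniform weights
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.sum(w[pred != y])               # weighted training error
        if err >= 0.5:                           # no better than random: stop
            break
        err = max(err, 1e-12)                    # avoid division by zero
        alpha = 0.5 * np.log((1.0 - err) / err)  # weight of this weak hypothesis
        w *= np.exp(-alpha * y * pred)           # up-weight misclassified points
        w /= w.sum()                             # renormalize to a distribution
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(X, stumps, alphas):
    """Predict by a weighted majority vote of the weak hypotheses."""
    votes = sum(a * s.predict(np.asarray(X)) for s, a in zip(stumps, alphas))
    return np.sign(votes)                        # ties (0) are left unresolved
```

Each round reweights the training set so that examples the current weak hypothesis gets wrong receive more attention in the next round, which is exactly the property that lets boosting help any learner that stays slightly better than random guessing. The pseudo-loss variant for multiclass problems, which reweights (example, incorrect-label) pairs instead of examples, is not shown here.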

Keywords

Boosting (machine learning), AdaBoost, Computer science, Artificial intelligence, Machine learning, Ensemble learning, Gradient boosting, Algorithm, Classifier (machine learning), Decision tree, Random forest, k-nearest neighbors algorithm, Training set, Pattern recognition

Related Publications

Bagging, boosting, and C4.5

Breiman's bagging and Freund and Schapire's boosting are recent methods for improving the predictive power of classifier learning systems. Both form a set of classifiers that are... (a minimal bagging sketch follows this entry)

1996 National Conference on Artificial Intelligence · 1262 citations
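For contrast with the boosting loop above, Breiman's bagging (described in the snippet) trains each classifier on an independent bootstrap resample of the training set and combines them by unweighted voting rather than by reweighting hard examples. A minimal sketch, assuming scikit-learn-style estimators and nonnegative integer class labels; `n_estimators` and the helper names are illustrative:

```python
# Minimal bagging sketch: bootstrap resamples + unweighted majority vote.
# Illustrative only; assumes scikit-learn-style estimators and
# nonnegative integer class labels (required by np.bincount).
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier

def bagging_fit(X, y, base=None, n_estimators=25, seed=0):
    """Train n_estimators copies of `base`, each on a bootstrap resample."""
    X, y = np.asarray(X), np.asarray(y)
    base = base if base is not None else DecisionTreeClassifier()
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_estimators):
        idx = rng.integers(0, len(y), size=len(y))  # sample with replacement
        models.append(clone(base).fit(X[idx], y[idx]))
    return models

def bagging_predict(X, models):
    """Unweighted majority vote over the ensemble's predictions."""
    preds = np.stack([m.predict(np.asarray(X)) for m in models])
    return np.array([np.bincount(col).argmax() for col in preds.T])
```

Because each model is trained independently on its own resample, bagging mainly reduces variance, whereas the boosting loop sketched earlier adapts each round to the mistakes of the previous rounds.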

Publication Info

Year: 1996
Type: article
Pages: 148-156
Citations: 7561
Access: Closed

Citation Metrics

7561 citations (source: OpenAlex)

Cite This

Yoav Freund, Robert E. Schapire (1996). Experiments with a new boosting algorithm. In Proceedings of the Thirteenth International Conference on Machine Learning (ICML 1996), 148-156.