Abstract

One approach to learning classification rules from examples is to build decision trees. A review and comparison paper by Mingers (Mingers, 1989) looked at the first stage of tree building, which uses a "splitting rule" to grow trees with a greedy recursive partitioning algorithm. That paper considered a number of different measures and experimentally examined their behavior on four domains. Its main conclusion was that a random splitting rule does not significantly decrease classification accuracy. This note suggests an alternative experimental method and presents additional results on further domains. Our results indicate that random splitting leads to increased error. These results are at variance with those presented by Mingers.
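To make the abstract's terms concrete, here is a minimal Python sketch of greedy recursive partitioning with two interchangeable splitting rules: an information-gain measure and a uniformly random attribute choice. This is an illustrative reconstruction, not the implementation used by either paper; the dataset format (rows as feature-keyed dicts), all function names, and the choice of information gain as the non-random measure are assumptions.

# A minimal sketch (not the paper's code) of greedy recursive partitioning.
# Rows are dicts mapping feature names to discrete values; this format,
# and all names below, are illustrative assumptions.
import math
import random
from collections import Counter

def entropy(labels):
    """Shannon entropy of a sequence of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, feature):
    """Entropy reduction from partitioning the data on one discrete feature."""
    n = len(rows)
    remainder = 0.0
    for value in set(row[feature] for row in rows):
        subset = [lab for row, lab in zip(rows, labels) if row[feature] == value]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

def build_tree(rows, labels, features, splitter):
    """Greedy recursive partitioning: choose a feature with `splitter`,
    partition the data on its values, and recurse on each subset."""
    if len(set(labels)) == 1 or not features:
        return Counter(labels).most_common(1)[0][0]  # leaf: majority class
    feature = splitter(rows, labels, features)
    node = {"feature": feature, "children": {}}
    for value in set(row[feature] for row in rows):
        idx = [i for i, row in enumerate(rows) if row[feature] == value]
        node["children"][value] = build_tree(
            [rows[i] for i in idx],
            [labels[i] for i in idx],
            [f for f in features if f != feature],
            splitter,
        )
    return node

# The two splitting rules being contrasted: an information-based measure
# versus choosing the split attribute uniformly at random.
def gain_splitter(rows, labels, features):
    return max(features, key=lambda f: information_gain(rows, labels, f))

def random_splitter(rows, labels, features):
    return random.choice(features)

# Example: tree = build_tree(rows, labels, list(rows[0]), gain_splitter)

Swapping gain_splitter for random_splitter in the same growing procedure is the experimental contrast at issue: whether an informed choice of split attribute, versus a uniformly random one, affects the error of the resulting tree.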

Keywords

Decision tree, Variance (accounting), Mathematics, Tree (set theory), Algorithm, Decision rule, Random forest, Rule induction, Computer science, Artificial intelligence, Machine learning, Mathematical optimization, Pattern recognition (psychology), Statistics, Combinatorics

Related Publications

Best-first Decision Tree Learning (2007, Research Commons, University of Waikato; 229 citations)

Decision trees are potentially powerful predictors and explicitly represent the structure of a dataset. Standard decision tree learners such as C4.5 expand nodes in depth-first ...

Publication Info

Year: 1992
Type: Article
Volume: 8
Issue: 1
Pages: 75-85
Citations: 157
Access: Closed

Citation Metrics

157 citations (source: OpenAlex)

Cite This

Wray Buntine, Tim Niblett (1992). A further comparison of splitting rules for decision-tree induction. Machine Learning, 8(1), 75-85. https://doi.org/10.1007/bf00994006

Identifiers

DOI: 10.1007/bf00994006