Abstract

Resampling methods are commonly used for dealing with the class‐imbalance problem. Their advantage over other methods is that they are external and thus, easily transportable. Although such approaches can be very simple to implement, tuning them most effectively is not an easy task. In particular, it is unclear whether oversampling is more effective than undersampling and which oversampling or undersampling rate should be used. This paper presents an experimental study of these questions and concludes that combining different expressions of the resampling approach is an effective solution to the tuning problem. The proposed combination scheme is evaluated on imbalanced subsets of the Reuters‐21578 text collection and is shown to be quite effective for these problems.
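As the abstract notes, resampling is external to the learner: the training set itself is rebalanced before any classifier is trained. The paper's contribution is a scheme combining multiple resampling rates; as a much simpler illustration of the two underlying operations, here is a minimal sketch of random oversampling and undersampling on a list-based dataset. The function `resample`, its parameters, and the rates shown are illustrative assumptions, not the authors' exact method.

```python
import random

def resample(examples, labels, minority_label, rate, mode="over"):
    """Randomly over- or under-sample a two-class dataset.

    mode="over":  duplicate each minority example `rate` extra times.
    mode="under": keep only a `rate` fraction of the majority examples.
    """
    minority = [x for x, y in zip(examples, labels) if y == minority_label]
    majority = [(x, y) for x, y in zip(examples, labels) if y != minority_label]
    if mode == "over":
        extra = minority * rate  # append duplicated minority examples
        return examples + extra, labels + [minority_label] * len(extra)
    # mode == "under": randomly drop majority examples
    kept = random.sample(majority, int(len(majority) * rate))
    xs = [x for x, _ in kept] + minority
    ys = [y for _, y in kept] + [minority_label] * len(minority)
    return xs, ys
```

For example, with 2 minority and 8 majority examples, oversampling at rate 3 adds 6 minority duplicates (yielding an 8/8 balance), while undersampling at rate 0.25 keeps 2 of the 8 majority examples. Choosing such rates well is exactly the tuning problem the paper addresses by combining several resampled variants rather than picking one.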

Keywords

Undersampling, Oversampling, Resampling, Computer science, Machine learning, Artificial intelligence, Class (philosophy), Scheme (mathematics), Task (project management), Data mining, Pattern recognition (psychology), Mathematics, Bandwidth (computing), Engineering

Publication Info

Year
2004
Type
article
Volume
20
Issue
1
Pages
18-36
Citations
1006
Access
Closed

Citation Metrics

1006 citations (source: OpenAlex)

Cite This

Andrew Estabrooks, Taeho Jo, Nathalie Japkowicz (2004). A Multiple Resampling Method for Learning from Imbalanced Data Sets. Computational Intelligence, 20(1), 18-36. https://doi.org/10.1111/j.0824-7935.2004.t01-1-00228.x

Identifiers

DOI
10.1111/j.0824-7935.2004.t01-1-00228.x