Abstract
We introduce a new algorithm designed to learn sparse perceptrons over input representations which include high-order features. Our algorithm, which is based on a hypothesis-boosting method, is able to PAC-learn a relatively natural class of target concepts. Moreover, the algorithm appears to work well in practice: on a set of three problem domains, the algorithm produces classifiers that utilize small numbers of features yet exhibit good generalization performance. Perhaps most importantly, our algorithm generates concept descriptions that are easy for humans to understand.

1 Introduction
Multi-layer perceptron (MLP) learning is a powerful method for tasks such as concept classification. However, in many applications, such as those that may involve scientific discovery, it is crucial to be able to explain predictions. Multi-layer perceptrons are limited in this regard, since their representations are notoriously difficult for humans to understand. We present an approach to learning ...
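The abstract does not spell out the learner's mechanics, but the idea of using hypothesis boosting to build a sparse perceptron over high-order features can be sketched as follows. This is a minimal illustrative sketch, not the paper's algorithm: it assumes an AdaBoost-style re-weighting loop whose weak hypotheses are single features (the original Boolean inputs plus pairwise conjunctions standing in for "high-order features"); the function name `boost_sparse_perceptron` and all implementation details are hypothetical. The final classifier is a weighted vote over the few features actually selected, i.e. a sparse linear threshold unit.

```python
import itertools
import math

def boost_sparse_perceptron(X, y, n_rounds=5):
    """Hypothetical sketch: boosting with single-feature weak hypotheses.

    X : list of Boolean (0/1) input vectors
    y : list of labels in {-1, +1}
    Returns (predict, model) where model maps the selected features
    to their accumulated vote weights -- the sparse perceptron.
    """
    n, d = len(X), len(X[0])
    # Expanded feature set: each input bit, plus every pairwise AND
    # conjunction as a "high-order" feature.
    feats = [(i,) for i in range(d)] + list(itertools.combinations(range(d), 2))

    def feat_val(x, f):
        v = 1
        for i in f:
            v &= x[i]
        return 1 if v else -1          # map {0,1} -> {-1,+1}

    w = [1.0 / n] * n                  # distribution over examples
    model = {}                         # (feature, sign) -> vote weight
    for _ in range(n_rounds):
        # Weak learner: pick the feature (or its negation) with the
        # lowest weighted error under the current distribution.
        best = None
        for f in feats:
            for sign in (1, -1):
                err = sum(wi for wi, x, yi in zip(w, X, y)
                          if sign * feat_val(x, f) != yi)
                if best is None or err < best[0]:
                    best = (err, f, sign)
        err, f, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)
        model[(f, sign)] = model.get((f, sign), 0.0) + alpha
        # Re-weight examples toward the current mistakes.
        w = [wi * math.exp(-alpha * yi * sign * feat_val(x, f))
             for wi, x, yi in zip(w, X, y)]
        z = sum(w)
        w = [wi / z for wi in w]

    def predict(x):
        s = sum(a * s_ * feat_val(x, f) for (f, s_), a in model.items())
        return 1 if s >= 0 else -1
    return predict, model
```

For example, on the target concept y = x0 AND x1, the loop selects the single conjunctive feature (0, 1) and the resulting "perceptron" uses one feature out of the three available, which is the sparsity and interpretability the abstract emphasizes.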
Publication Info
- Year: 1995
- Type: article
- Volume: 8
- Pages: 654-660
- Citations: 27
- Access: Closed