Abstract

We introduce a new algorithm designed to learn sparse perceptrons over input representations that include high-order features. The algorithm, which is based on a hypothesis-boosting method, PAC-learns a relatively natural class of target concepts. Moreover, it appears to work well in practice: on a set of three problem domains, it produces classifiers that use small numbers of features yet exhibit good generalization performance. Perhaps most importantly, the concept descriptions it generates are easy for humans to understand.

1 Introduction

Multi-layer perceptron (MLP) learning is a powerful method for tasks such as concept classification. However, in many applications, such as those involving scientific discovery, it is crucial to be able to explain predictions. Multi-layer perceptrons are limited in this regard, since their representations are notoriously difficult for humans to understand. We present an approach to learning ...
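The abstract describes boosting weak hypotheses into a classifier that votes over only a few (possibly high-order) features. The sketch below illustrates that general idea with an AdaBoost-style loop in which each weak hypothesis is a single signed feature, so the final vote is a sparse linear threshold function; it is a hedged illustration of the boosting-to-sparsity idea, not the paper's exact algorithm, and all names here (`boost_sparse_perceptron`, `predict`) are ours.

```python
import numpy as np

def boost_sparse_perceptron(X, y, rounds=10):
    """Boost single-feature weak hypotheses into a sparse perceptron.

    X : (n, d) array of +/-1 features (may include high-order products).
    y : (n,) array of +/-1 labels.
    Returns a weight vector with few nonzero entries (illustrative
    sketch only, not the paper's exact procedure).
    """
    n, d = X.shape
    w = np.ones(n) / n                 # distribution over examples
    coef = np.zeros(d)                 # perceptron weights (sparse)
    preds = np.sign(X)                 # weak hypothesis j is sign(x_j)
    for _ in range(rounds):
        # Weighted error of each single-feature hypothesis, and of its
        # negation (error 1 - err): pick whichever side is better.
        errs = (w[:, None] * (preds != y[:, None])).sum(axis=0)
        j = int(np.argmin(np.minimum(errs, 1 - errs)))
        s = 1.0 if errs[j] <= 1 - errs[j] else -1.0
        err = np.clip(min(errs[j], 1 - errs[j]), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # AdaBoost vote weight
        coef[j] += s * alpha                    # accumulate into one weight
        # Reweight examples: misclassified points gain weight.
        w *= np.exp(-alpha * y * (s * preds[:, j]))
        w /= w.sum()
    return coef

def predict(coef, X):
    return np.sign(X @ coef)

# Usage: expand +/-1 inputs with pairwise products (second-order
# features), then learn a target that is itself one such product.
rng = np.random.default_rng(0)
B = rng.choice([-1.0, 1.0], size=(200, 5))
pairs = [(i, j) for i in range(5) for j in range(i + 1, 5)]
X = np.hstack([B] + [(B[:, i] * B[:, j])[:, None] for i, j in pairs])
y = B[:, 0] * B[:, 1]                  # target = a second-order feature
coef = boost_sparse_perceptron(X, y, rounds=5)
```

Because the target concept coincides with one expanded feature, boosting concentrates all of its vote weight there, giving a classifier that is both accurate and trivially interpretable, which mirrors the sparsity-for-comprehensibility point made in the abstract.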

Keywords

Perceptron, Computer science, Boosting (machine learning), Artificial intelligence, Generalization, Machine learning, Class (philosophy), Set (abstract data type), Algorithm, Pattern recognition (psychology), Artificial neural network, Mathematics

Publication Info

Year
1995
Type
article
Volume
8
Pages
654-660
Citations
27
Access
Closed

Cite This

Jeffrey C. Jackson, Mark Craven (1995). Learning Sparse Perceptrons. 8, 654-660.