Abstract
To date, the preponderance of techniques for eliciting the knowledge embedded in trained artificial neural networks (ANNs) has focused primarily on extracting rule-based explanations from feedforward ANNs. The ADT taxonomy for categorizing such techniques was proposed in 1995 to provide a basis for the systematic comparison of the different approaches. This paper shows not only that this taxonomy is applicable to a cross-section of current techniques for extracting rules from trained feedforward ANNs, but also that it can be adapted and extended to embrace a broader range of ANN types (e.g., recurrent neural networks) and explanation structures. In addition, the paper identifies some of the key research questions in extracting the knowledge embedded within ANNs, including the need to formulate a consistent theoretical basis for what has been, until recently, a disparate collection of empirical results.
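To give a flavor of the rule-extraction techniques the abstract refers to, the sketch below shows a minimal decompositional approach for a single binary-input perceptron: it searches for minimal sets of active inputs whose weighted sum clears the bias, each such set becoming an if-then rule. The weights, bias, and function name are hypothetical values chosen for illustration; this is not the paper's method or the ADT taxonomy itself.

```python
from itertools import combinations

def extract_rules(weights, bias):
    """Return minimal subsets of active (value-1) inputs that make a
    threshold unit fire (weighted sum + bias > 0), assuming all other
    inputs are 0 and all weights are non-negative."""
    n = len(weights)
    rules = []
    for size in range(1, n + 1):
        for subset in combinations(range(n), size):
            # Skip subsets already covered by a smaller extracted rule.
            if any(set(r).issubset(subset) for r in rules):
                continue
            if sum(weights[i] for i in subset) + bias > 0:
                rules.append(subset)
    return rules

# Toy unit: fires when input 2 alone is on, or inputs 0 and 1 together.
print(extract_rules([3.0, 2.0, 6.0], -4.0))  # → [(2,), (0, 1)]
```

With non-negative weights, activating additional inputs can only increase the sum, so each extracted subset is a sound rule; handling negative weights is one of the complications the surveyed techniques address.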
Publication Info
- Year: 1998
- Type: article
- Volume: 9
- Issue: 6
- Pages: 1057-1068
- Citations: 438
- Access: Closed
Identifiers
- DOI: 10.1109/72.728352
- PMID: 18255792