Abstract

The authors present a method for incorporating a priori knowledge into the training of recurrent neural networks. This prior knowledge can be interpreted as hints about the problem to be learned, and these hints are encoded as rules which are then inserted into the neural network. The authors demonstrate the approach by training recurrent neural networks with inserted rules to recognize regular languages from grammatical string examples. Because the recurrent networks have second-order connections, rule insertion is a straightforward mapping of rules into weights and neurons. Simulations show that training recurrent networks with different amounts of partial knowledge to recognize simple grammars improves training times by orders of magnitude, even when only a small fraction of all transitions are inserted as rules. In addition, there appears to be no loss in generalization performance.
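
To make the rule-insertion idea concrete, below is a minimal sketch in Python of how known DFA transitions might be programmed into the third-order weight tensor of a second-order recurrent network before training. The constant H, the random initialization range, the sigmoid update, and the toy automaton are illustrative assumptions, not values taken from the paper; only the general idea of mapping a known transition delta(q_j, a_k) = q_i into the weight W[i, j, k] follows the scheme the abstract describes.

```python
import numpy as np

def insert_rules(n_states, n_symbols, known_transitions, H=3.0):
    """Return a second-order weight tensor W with known transitions pre-programmed.

    known_transitions: iterable of (target_state i, source_state j, symbol k)
    triples. One common encoding (assumed here): W[i, j, k] = +H drives the
    target state's neuron high, W[j, j, k] = -H drives the source state's
    neuron low when symbol k is read while state j is active.
    """
    rng = np.random.default_rng(0)
    # Small random initialization for all weights not fixed by rules.
    W = rng.uniform(-0.1, 0.1, size=(n_states, n_states, n_symbols))
    for i, j, k in known_transitions:
        W[i, j, k] = +H   # activate the target state's neuron
        W[j, j, k] = -H   # deactivate the source state's neuron
    return W

def step(W, state, symbol_onehot):
    """One second-order update: S_i(t+1) = sigmoid(sum_jk W[i,j,k] * S_j(t) * I_k(t))."""
    pre = np.einsum('ijk,j,k->i', W, state, symbol_onehot)
    return 1.0 / (1.0 + np.exp(-pre))

# Toy usage: 3 state neurons, 2 input symbols, one known transition 0 --a--> 1.
W = insert_rules(n_states=3, n_symbols=2, known_transitions=[(1, 0, 0)])
state = np.array([1.0, 0.0, 0.0])   # start with state neuron 0 active
a = np.array([1.0, 0.0])            # one-hot encoding of symbol 'a'
print(step(W, state, a))            # neuron 1 is driven high, neuron 0 low
```

In this sketch the programmed weights act as a partial state machine, and the remaining small random weights would be adapted by the usual gradient training on example strings.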

Keywords

A priori and a posteriori, Generalization, Computer science, Artificial neural network, Artificial intelligence, Recurrent neural network, String (physics), Simple (philosophy), Fraction (chemistry), Machine learning, Theoretical computer science, Mathematics

Related Publications

Neural network ensembles

Several means for improving the performance and training of neural networks for classification are proposed. Cross-validation is used as a tool for optimizing network parameters ...

1990, IEEE Transactions on Pattern Analysis..., 4195 citations

Publication Info

Year: 2003
Type: article
Volume: 1
Pages: 13-22
Citations: 28
Access: Closed


Citation Metrics

28 citations (OpenAlex)

Cite This

C. Lee Giles, Christian W. Omlin (2003). Inserting rules into recurrent neural networks. 1, 13-22. https://doi.org/10.1109/nnsp.1992.253712

Identifiers

DOI
10.1109/nnsp.1992.253712