Abstract
Learning structure in temporally extended sequences is a difficult computational problem because only a fraction of the relevant information is available at any instant. Although variants of backpropagation can in principle be used to find structure in sequences, in practice they are not sufficiently powerful to discover arbitrary contingencies, especially those spanning long temporal intervals or involving high-order statistics. For example, in designing a connectionist network for music composition, we have encountered the problem that the net is able to learn musical structure that occurs locally in time--e.g., relations among notes within a musical phrase--but not structure that occurs over longer time periods--e.g., relations among phrases. To address this problem, we require a means of constructing a reduced description of the sequence that makes global aspects more explicit or more readily detectable. I propose to achieve this using hidden units that operate with different time constants. Simulation experiments indicate that slower time-scale hidden units are able to pick up global structure, structure that simply cannot be learned by standard backpropagation.
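The abstract's central mechanism--hidden units operating at different time constants--is commonly realized as a leaky integrator: each unit blends its previous activity with its new input according to a decay parameter. The sketch below is a minimal illustration of that idea, not the paper's implementation; the function name, the `tau` parameterization, and the tanh nonlinearity are our own assumptions.

```python
import math

def update_hidden(h_prev, net_input, tau):
    """One step of a leaky-integrator hidden unit (hypothetical sketch).

    tau in [0, 1) sets the effective time constant: tau = 0 gives a
    standard hidden unit that tracks its input instantly, while tau
    near 1 makes the unit average over roughly 1/(1 - tau) time steps,
    letting it respond to slow, global structure in the sequence.
    """
    return tau * h_prev + (1.0 - tau) * math.tanh(net_input)

# Feed the same short sequence to a fast unit and a slow unit.
h_fast, h_slow = 0.0, 0.0
for x in [1.0, 1.0, 1.0, -1.0]:
    h_fast = update_hidden(h_fast, x, tau=0.0)   # reacts to each input
    h_slow = update_hidden(h_slow, x, tau=0.9)   # integrates over time
```

After the final negative input, the fast unit's activity is negative, but the slow unit remains positive because it still carries the memory of the earlier positive inputs; this persistence across time steps is what would let slower units capture phrase-level regularities that a standard unit cannot.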
Publication Info
- Year: 1991
- Type: article
- Volume: 4
- Pages: 275-282
- Citations: 132
- Access: Closed