Abstract

This paper describes NETtalk, a class of massively-parallel network systems that learn to convert English text to speech. The memory representations for pronunciations are learned by practice and are shared among many processing units. The performance of NETtalk has some similarities with observed human performance. (i) The learning follows a power law. (ii) The more words the network learns, the better it is at generalizing and correctly pronouncing new words. (iii) The performance of the network degrades very slowly as connections in the network are damaged: no single link or processing unit is essential. (iv) Relearning after damage is much faster than learning during the original training. (v) Distributed or spaced practice is more effective for long-term retention than massed practice. Network models can be constructed that have the same performance and learning characteristics on a particular task, but differ completely at the levels of synaptic strengths and single-unit responses. However, hierarchical clustering techniques applied to NETtalk reveal that these different networks have similar internal representations of letter-to-sound correspondences within groups of processing units. This suggests that invariant internal representations may be found in assemblies of neurons intermediate in size between highly localized and completely distributed representations.
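The abstract compresses several mechanisms worth unpacking. Below is a minimal sketch of a NETtalk-style network, assuming a seven-letter sliding window over the text, one-hot letter coding, a single sigmoid hidden layer trained by plain backpropagation, and a hierarchical clustering of hidden-unit activation vectors of the kind used to compare internal representations across networks. The alphabet, layer sizes, learning rate, and the toy input string are illustrative assumptions, not the paper's exact configuration.

import numpy as np
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(0)

ALPHABET = "abcdefghijklmnopqrstuvwxyz _"  # '_' pads beyond the text edges (assumed coding)
WINDOW = 7                                  # letters presented at once; the middle one is pronounced
N_IN = WINDOW * len(ALPHABET)
N_HID = 80                                  # on the order of the hidden-layer sizes reported for NETtalk
N_OUT = 26                                  # placeholder size for the phoneme-feature code

def encode_window(text, center):
    """One-hot encode the seven-letter window centered on position `center`."""
    x = np.zeros(N_IN)
    for k in range(WINDOW):
        i = center - WINDOW // 2 + k
        ch = text[i] if 0 <= i < len(text) else "_"
        x[k * len(ALPHABET) + ALPHABET.index(ch)] = 1.0
    return x

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0.0, 0.1, (N_HID, N_IN))    # input -> hidden weights
W2 = rng.normal(0.0, 0.1, (N_OUT, N_HID))   # hidden -> output weights

def forward(x):
    h = sigmoid(W1 @ x)   # hidden-unit activations (the representations that get clustered)
    y = sigmoid(W2 @ h)   # phoneme-feature outputs for the center letter
    return h, y

def train_step(x, target, lr=0.5):
    """One backpropagation step on squared error (global weights for brevity)."""
    global W1, W2
    h, y = forward(x)
    d2 = (y - target) * y * (1.0 - y)        # output-layer deltas
    d1 = (W2.T @ d2) * h * (1.0 - h)         # hidden-layer deltas
    W2 -= lr * np.outer(d2, h)
    W1 -= lr * np.outer(d1, x)
    return float(((y - target) ** 2).sum())

# Clustering analysis in miniature: gather hidden activations for each letter
# position of a toy string and build a hierarchical clustering over them.
text = "the cat sat"
acts = np.stack([forward(encode_window(text, i))[0] for i in range(len(text))])
tree = linkage(acts, method="average")       # merge history over activation patterns

The rows of `tree` record which activation patterns merge first; the abstract's claim is that this merge structure (for example, vowel cases grouping apart from consonant cases) stays similar across retrained networks even though individual weights and single-unit responses do not.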

Keywords

Computer science · Task (project management) · Artificial intelligence · Cluster analysis · Class (philosophy) · Natural language processing

Related Publications

Finding Structure in Time

Time underlies many interesting human behaviors. Thus, the question of how to represent time in connectionist models is very important. One approach is to represent time implici...

1990 · Cognitive Science · 10427 citations

DeepWalk

We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector...

2014 · 8168 citations

Long Short-Term Memory

Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We brief...

1997 · Neural Computation · 90535 citations

Publication Info

Year: 1987
Type: article
Volume: 1
Pages: 145-168
Citations: 1556
Access: Closed

Citation Metrics

1556 citations (OpenAlex)

Cite This

Terrence J. Sejnowski & Charles R. Rosenberg (1987). Parallel networks that learn to pronounce English text. Complex Systems, 1, 145-168.