Abstract

Time underlies many interesting human behaviors. Thus, the question of how to represent time in connectionist models is very important. One approach is to represent time implicitly by its effects on processing rather than explicitly (as in a spatial representation). The current report develops a proposal along these lines first described by Jordan (1986) which involves the use of recurrent links in order to provide networks with a dynamic memory. In this approach, hidden unit patterns are fed back to themselves; the internal representations which develop thus reflect task demands in the context of prior internal states. A set of simulations is reported which range from relatively simple problems (temporal version of XOR) to discovering syntactic/semantic features for words. The networks are able to learn interesting internal representations which incorporate task demands with memory demands; indeed, in this approach the notion of memory is inextricably bound up with task processing. These representations reveal a rich structure, which allows them to be highly context-dependent, while also expressing generalizations across classes of items. These representations suggest a method for representing lexical categories and the type/token distinction.
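
To make the mechanism in the abstract concrete, the sketch below implements a minimal Elman-style forward pass in NumPy: at each time step the hidden unit pattern is copied back and supplied as "context" input on the next step. The layer sizes, random weights, and the temporal-XOR-style bit stream are illustrative assumptions rather than the paper's reported simulations, and the training procedure is omitted.

```python
# Minimal sketch of a simple recurrent (Elman-style) network forward pass.
# All sizes, weights, and the toy bit stream are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size, output_size = 1, 4, 1

W_xh = rng.normal(scale=0.5, size=(hidden_size, input_size))   # input -> hidden
W_hh = rng.normal(scale=0.5, size=(hidden_size, hidden_size))  # context -> hidden
W_hy = rng.normal(scale=0.5, size=(output_size, hidden_size))  # hidden -> output

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def srn_forward(sequence):
    """Process a sequence one element at a time, feeding each hidden
    state back in as the context for the following time step."""
    context = np.zeros(hidden_size)            # context units start at rest
    outputs = []
    for x in sequence:
        hidden = sigmoid(W_xh @ np.atleast_1d(x) + W_hh @ context)
        outputs.append(sigmoid(W_hy @ hidden))
        context = hidden                       # copy hidden pattern back
    return outputs

# Temporal-XOR-style bit stream (every third bit is the XOR of the previous
# two); a trained network would learn to predict the predictable bits.
bits = [1, 0, 1, 0, 1, 1, 1, 1, 0]
print([round(o.item(), 3) for o in srn_forward(bits)])
```

With untrained weights the outputs are of course uninformative; the point of the sketch is only the state update, in which memory arises from reusing the previous hidden pattern rather than from any explicit representation of time.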

Keywords

Connectionism, Computer science, Set (abstract data type), Task (project management), Context (archaeology), Representation (politics), Artificial intelligence, Natural language processing, Semantic memory, Security token, Cognitive science, Cognition, Artificial neural network, Psychology

Publication Info

Year: 1990
Type: Article
Volume: 14
Issue: 2
Pages: 179-211
Citations (OpenAlex): 10427
Access: Closed

Cite This

Jeffrey L. Elman (1990). Finding Structure in Time. Cognitive Science, 14(2), 179-211. https://doi.org/10.1207/s15516709cog1402_1

Identifiers

DOI: 10.1207/s15516709cog1402_1