Abstract
Preliminary experiments showed that Ss better recall a noun pair if they generate their own linking sentence for the pair than if they merely read an equivalent linking sentence. Initial attempts to explain this effect in terms of memory search activities or idiosyncratically high-associative mediators proved unproductive in later experiments reported here. A hypothesis was then offered that the generate vs. read conditions differ in comprehension of the sentences and that comprehension aids retention. Subsequent experiments on incidental learning showed that recall is excellent when S is set to process a sentence in different ways designed to promote comprehension of its meaning, whereas equivalent exposure to or mouthing of the words in control sentences without comprehension produces relatively little recall.

The following experiments are concerned with the facilitation of paired-associate learning produced by embedding each word pair in a sentence. Rohwer (1966) found that an S who hears a linking sentence such as The COW chased the BALL will recall the COW-BALL pair better than a control S who simply studied the pair without a sentence context. In repeating some of Rohwer's paradigms, another phenomenon was uncovered which led into the present experimental series. The phenomenon is that Ss better remembered noun pairs embedded in sentences they generated than they did pairs embedded in sentences E gave them. At the time of study or input, Ss in the read condition read aloud a presented sentence (e.g., The COW chased the BALL), whereas those in the generation condition saw the pair COW-BALL and had to make up and say aloud a linking sentence. Although input times were controlled, later recall (of BALL when cued with COW) was about 25-30% higher in the generation condition. This result is quite reliable, having been replicated several times in the experiments reported subsequently. Why does generating a linking sentence produce better recall than reading one?
Related Publications
Velvet: Algorithms for de novo short read assembly using de Bruijn graphs
We have developed a new set of algorithms, collectively called “Velvet,” to manipulate de Bruijn graphs for genomic sequence assembly. A de Bruijn graph is a compact representat...
Error filtering, pair assembly and error correction for next-generation sequencing reads
Abstract Motivation: Next-generation sequencing produces vast amounts of data with errors that are difficult to distinguish from true biological variation when coverage is low. ...
Evaluating the Effectiveness of Large Language Models in Representing Textual Descriptions of Geometry and Spatial Relations (Short Paper)
This research focuses on assessing the ability of large language models (LLMs) in representing geometries and their spatial relations. We utilize LLMs including GPT-2 and BERT t...
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks. However, at some point further model increas...
THE ANOMALOUS BEHAVIOUR OF PRECISION IN THE SWETS MODEL, AND ITS RESOLUTION
M. H. Heine has shown that if one follows the retrieval procedure associated with Swets' model of an information retrieval system it is possible that the inverse relationship be...
Publication Info
- Year
- 1969
- Type
- article
- Volume
- 80
- Issue
- 3, Pt.1
- Pages
- 455-461
- Citations
- 218
- Access
- Closed
Identifiers
- DOI
- 10.1037/h0027461