Abstract

Vector‐based models of word meaning have become increasingly popular in cognitive science. The appeal of these models lies in their ability to represent meaning simply by using distributional information under the assumption that words occurring within similar contexts are semantically similar. Despite their widespread use, vector‐based models are typically directed at representing words in isolation, and methods for constructing representations for phrases or sentences have received little attention in the literature. This is in marked contrast to experimental evidence (e.g., in sentential priming) suggesting that semantic similarity is more complex than simply a relation between isolated words. This article proposes a framework for representing the meaning of word combinations in vector space. Central to our approach is vector composition, which we operationalize in terms of additive and multiplicative functions. Under this framework, we introduce a wide range of composition models that we evaluate empirically on a phrase similarity task.
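
The additive and multiplicative composition functions mentioned in the abstract can be illustrated with a short sketch. The following Python snippet is a minimal illustration, not the authors' code: it assumes NumPy, uses invented five-dimensional vectors in place of real corpus-derived distributional vectors, and compares the two resulting phrase vectors with cosine similarity (a standard measure for this kind of comparison).

```python
import numpy as np

# Hypothetical distributional vectors for two words in a phrase.
# In practice these would be derived from co-occurrence counts over a corpus.
u = np.array([0.0, 3.0, 1.0, 5.0, 2.0])
v = np.array([1.0, 6.0, 2.0, 4.0, 0.0])

# Additive composition: the phrase vector is the sum of the word vectors.
p_add = u + v

# Multiplicative composition: the phrase vector is the element-wise product.
p_mult = u * v

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print("additive:      ", p_add)
print("multiplicative:", p_mult)
print("cosine(additive, multiplicative):", round(cosine(p_add, p_mult), 3))
```

Note how the element-wise product zeroes out any dimension on which either word has no weight, whereas addition preserves contributions from both words; this difference is one of the contrasts the article's composition models are designed to explore.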

Keywords

Natural language processing; Computer science; Distributional semantics; Operationalization; Phrase; Artificial intelligence; Meaning; Word; Semantic similarity; Similarity; Semantics; Priming; Multiplicative function; Composition; Linguistics; Mathematics; Psychology

Publication Info

Year: 2010
Type: Article
Volume: 34
Issue: 8
Pages: 1388-1429
Citations: 967
Access: Closed

Citation Metrics

Citations (OpenAlex): 967

Cite This

Jeff Mitchell, Mirella Lapata (2010). Composition in Distributional Models of Semantics. Cognitive Science, 34(8), 1388-1429. https://doi.org/10.1111/j.1551-6709.2010.01106.x

Identifiers

DOI: 10.1111/j.1551-6709.2010.01106.x