Abstract

The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.
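The abstract names three concrete techniques: subsampling of frequent words, negative sampling as a replacement for the hierarchical softmax, and a scoring heuristic for merging word pairs such as "Air Canada" into single phrase tokens. The short Python sketch below illustrates the two data-preparation steps, subsampling and phrase scoring, using the formulas given in the paper (discard probability 1 - sqrt(t / f(w)) and score (count(ab) - delta) / (count(a) * count(b))); the function names, default threshold values, and single-pass phrase detection are illustrative assumptions, not part of this record.

import math
import random
from collections import Counter
from itertools import pairwise  # Python 3.10+

def subsample_probability(word_freq: float, t: float = 1e-5) -> float:
    """Probability of discarding a word, following the paper's
    subsampling rule P(w) = 1 - sqrt(t / f(w)), where f(w) is the
    word's relative corpus frequency and t is a chosen threshold."""
    return max(0.0, 1.0 - math.sqrt(t / word_freq))

def subsample(tokens: list[str], t: float = 1e-5, seed: int = 0) -> list[str]:
    """Randomly drop frequent words so that very common tokens
    (e.g. 'the', 'a') contribute fewer training pairs."""
    rng = random.Random(seed)
    counts = Counter(tokens)
    total = len(tokens)
    kept = []
    for w in tokens:
        freq = counts[w] / total
        if rng.random() >= subsample_probability(freq, t):
            kept.append(w)
    return kept

def phrase_score(bigram_count: int, count_a: int, count_b: int,
                 delta: float = 5.0) -> float:
    """Phrase-finding heuristic from the paper:
    score(a, b) = (count(ab) - delta) / (count(a) * count(b)).
    The discount delta prevents very rare bigrams from scoring high."""
    return (bigram_count - delta) / (count_a * count_b)

def find_phrases(tokens: list[str], delta: float = 5.0,
                 threshold: float = 1e-4) -> set[tuple[str, str]]:
    """Return bigrams whose score exceeds the threshold; such pairs
    are merged into single tokens like 'Air_Canada' before training."""
    unigrams = Counter(tokens)
    bigrams = Counter(pairwise(tokens))
    return {
        (a, b)
        for (a, b), c in bigrams.items()
        if phrase_score(c, unigrams[a], unigrams[b], delta) > threshold
    }

In the paper this phrase pass is typically repeated 2-4 times over the training data with a decreasing threshold so that longer multi-word phrases can form; the negative-sampling training objective itself is a change to the model rather than to the data and is not sketched here.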

Keywords

Computer science, Principle of compositionality, Softmax function, Word (group theory), Natural language processing, Artificial intelligence, Simple (philosophy), Semantics (computer science), Quality (philosophy), Speedup, Linguistics, Artificial neural network

Publication Info

Year: 2013
Type: article
Volume: 26
Pages: 3111-3119
Citations: 18057
Access: Closed

Citation Metrics

18057 (OpenAlex)

Cite This

Tomáš Mikolov, Ilya Sutskever, Kai Chen et al. (2013). Distributed Representations of Words and Phrases and their Compositionality. arXiv (Cornell University), 26, 3111-3119. https://doi.org/10.48550/arxiv.1310.4546

Identifiers

DOI: 10.48550/arxiv.1310.4546