Abstract

We present a new type of neural probabilistic language model that learns a mapping from both words and explicit word features into a continuous space that is then used for word prediction. Additionally, we investigate several ways of deriving continuous word representations for unknown words from those of known words. The resulting model significantly reduces perplexity on sparse-data tasks when compared to standard backoff models, standard neural language models, and factored language models.
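To make the idea concrete, below is a minimal sketch of a factored feed-forward language model in PyTorch. This is not the authors' implementation: the factor inventory (word, stem, POS tag) and all layer sizes are illustrative assumptions. Each factor of each context word gets its own embedding table; the embeddings are concatenated, passed through a hidden layer, and mapped to a distribution over the next word.

```python
import torch
import torch.nn as nn

class FactoredNeuralLM(nn.Module):
    """Sketch of a feed-forward LM whose inputs are factored into parallel
    streams (e.g. word, stem, POS tag), each with its own embedding table.
    Hypothetical illustration, not the paper's exact architecture."""

    def __init__(self, vocab_sizes, embed_dims, context_size, hidden_dim, out_vocab):
        super().__init__()
        # One embedding table per factor.
        self.embeddings = nn.ModuleList(
            nn.Embedding(v, d) for v, d in zip(vocab_sizes, embed_dims)
        )
        # Context positions are concatenated, as in a standard feed-forward n-gram NNLM.
        self.hidden = nn.Linear(context_size * sum(embed_dims), hidden_dim)
        self.output = nn.Linear(hidden_dim, out_vocab)

    def forward(self, context):
        # context: LongTensor of shape (batch, context_size, n_factors)
        factor_vecs = [emb(context[:, :, i]) for i, emb in enumerate(self.embeddings)]
        x = torch.cat(factor_vecs, dim=-1).flatten(start_dim=1)
        h = torch.tanh(self.hidden(x))
        return torch.log_softmax(self.output(h), dim=-1)

# Illustrative sizes: two-word context, factors = (word, stem, POS tag).
model = FactoredNeuralLM(vocab_sizes=[10000, 4000, 50],
                         embed_dims=[60, 30, 10],
                         context_size=2, hidden_dim=100, out_vocab=10000)
context = torch.randint(0, 50, (8, 2, 3))   # batch of 8 contexts, factor indices
log_probs = model(context)                  # shape (8, 10000)
```

One way to read the benefit for sparse data: an out-of-vocabulary surface word can still contribute an informative input vector through factors it shares with known words (such as its stem or POS tag), which is in the spirit of the abstract's point about deriving representations for unknown words from those of known words.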

Keywords

Perplexity, Language model, Computer science, Word (group theory), Artificial intelligence, Natural language processing, Probabilistic logic, Artificial neural network, Linguistics

Publication Info

Year: 2006
Type: article
Pages: 1-4
Citations: 95
Access: Closed

Citation Metrics

95 citations (OpenAlex)

Cite This

Andrei T. Alexandrescu, Katrin Kirchhoff (2006). Factored neural language models. Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, 1-4. https://doi.org/10.3115/1614049.1614050

Identifiers

DOI: 10.3115/1614049.1614050