Abstract

Representations of semantic information about words are necessary for many applications of neural networks in natural language processing. This paper describes an efficient, corpus-based method for inducing distributed semantic representations for a large number of words (50,000) from lexical co-occurrence statistics by means of a large-scale linear regression. The representations are successfully applied to word sense disambiguation using a nearest neighbor method.
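The core idea — represent each word by its co-occurrence counts, represent an occurrence of an ambiguous word by summing the vectors of its neighbors, and assign the sense of the nearest labeled occurrence — can be sketched as below. This is a toy illustration, not the paper's implementation: the corpus, window size, and sense labels are made up, and the paper's compression of the 50,000-word vectors via large-scale linear regression is omitted in favor of raw counts.

```python
from collections import Counter
from math import sqrt

# Toy corpus (illustrative; the paper used a large corpus and ~50,000 words).
corpus = [
    "the bank raised interest rates on deposits",
    "the bank approved the loan at low interest",
    "we walked along the river bank at sunset",
    "fish swam near the muddy river bank",
]
sentences = [s.split() for s in corpus]
WINDOW = 3  # co-occurrence window size (an illustrative choice)

def cooccurrence(sents, window):
    """Count how often each word appears within `window` positions of each other word."""
    counts = {}
    for tokens in sents:
        for i, w in enumerate(tokens):
            ctx = counts.setdefault(w, Counter())
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    ctx[tokens[j]] += 1
    return counts

vecs = cooccurrence(sentences, WINDOW)

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(v * b[k] for k, v in a.items() if k in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def context_vector(tokens, idx, window):
    """Represent one occurrence: sum the co-occurrence vectors of its neighbors."""
    v = Counter()
    lo, hi = max(0, idx - window), min(len(tokens), idx + window + 1)
    for j in range(lo, hi):
        if j != idx and tokens[j] in vecs:
            v.update(vecs[tokens[j]])
    return v

# Labeled training occurrences of the ambiguous word "bank" (labels are made up).
train = [
    (sentences[0], 1, "finance"),  # "the bank raised interest rates ..."
    (sentences[2], 5, "river"),    # "... the river bank at sunset"
]

def disambiguate(tokens, idx):
    """Assign the sense of the nearest labeled occurrence, by cosine similarity."""
    cv = context_vector(tokens, idx, WINDOW)
    best = max(train, key=lambda t: cosine(cv, context_vector(t[0], t[1], WINDOW)))
    return best[2]
```

A new occurrence, e.g. `"the loan officer at the bank set the interest rate".split()`, is disambiguated by comparing its context vector against those of the labeled occurrences; in the paper, this nearest-neighbor step operates on the regression-compressed vectors rather than raw counts.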

Keywords

Computer science, Natural language processing, Artificial intelligence, Word (group theory), Word-sense disambiguation, Semantic compression, Semantic space, Semantics (computer science), Natural language, Artificial neural network, SemEval, Space (punctuation), Semantic computing, WordNet, Linguistics, Semantic Web


Publication Info

Year: 1992
Type: article
Volume: 5
Pages: 895-902
Citations: 212
Access: Closed


Citation Metrics

212 citations (OpenAlex)

Cite This

Hinrich Schütze (1992). Word Space. Neural Information Processing Systems, 5, 895-902.