Abstract

Self-organizing maps have a bearing on traditional vector quantization. A characteristic that makes them more closely resemble certain biological brain maps, however, is the spatial order of their responses, which is formed in the learning process. A discussion is presented of the basic algorithms and two innovations: dynamic weighting of the input signals at each input of each cell, which improves the ordering when very different input signals are used, and definition of neighborhoods in the learning algorithm by the minimal spanning tree, which provides a far better and faster approximation of prominently structured density functions. It is cautioned that if the maps are used for pattern recognition and decision processes, it is necessary to fine-tune the reference vectors so that they directly define the decision borders.
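The basic self-organizing map learning rule described above can be illustrated with a short sketch. The following Python snippet assumes a rectangular grid, Euclidean distance, a Gaussian neighborhood, and exponentially decaying learning rate and radius; the grid size and all parameter values are illustrative assumptions, not the paper's variants (dynamic input weighting, minimal-spanning-tree neighborhoods) or settings.

# Minimal SOM sketch; grid shape, decay schedule, and parameters are assumptions,
# not the configuration used in the paper.
import numpy as np

def train_som(data, grid_shape=(10, 10), epochs=20, lr0=0.5, radius0=3.0, seed=0):
    """Train a 2-D SOM on `data` (n_samples x n_features) with a Gaussian neighborhood."""
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    n_features = data.shape[1]
    # Reference (weight) vectors, one per map cell, initialized randomly.
    weights = rng.random((rows, cols, n_features))
    # Grid coordinates of every cell, used for neighborhood distances on the map.
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Exponentially decay the learning rate and neighborhood radius.
            frac = step / n_steps
            lr = lr0 * np.exp(-3.0 * frac)
            radius = radius0 * np.exp(-3.0 * frac)
            # Best-matching unit: the cell whose reference vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighborhood around the BMU on the map grid.
            grid_dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            h = np.exp(-grid_dist2 / (2.0 * radius ** 2))
            # Move reference vectors toward x, weighted by the neighborhood function.
            weights += lr * h[..., None] * (x - weights)
            step += 1
    return weights

if __name__ == "__main__":
    # Purely illustrative usage with random data.
    data = np.random.default_rng(1).random((200, 3))
    som = train_som(data)
    print(som.shape)  # (10, 10, 3)

As the abstract notes, the minimal-spanning-tree variant instead defines neighborhoods through a spanning tree over the reference vectors rather than the fixed grid distance used in this sketch.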

Keywords

Computer science, Self-organizing map, Artificial intelligence, Pattern recognition (psychology), Artificial neural network


Publication Info

Year: 1990
Type: Article
Volume: 1
Issue: 1
Pages: 93-99
Citations: 330
Access: Closed

Citation Metrics

OpenAlex: 330
CrossRef: 259

Cite This

Jari Kangas, Teuvo Kohonen, Jorma Laaksonen (1990). Variants of self-organizing maps. IEEE Transactions on Neural Networks, 1(1), 93-99. https://doi.org/10.1109/72.80208

Identifiers

DOI: 10.1109/72.80208
PMID: 18282826
