Abstract

High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
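
The abstract compactly describes the setup: a deep encoder maps the input to a small central code, a mirrored decoder maps the code back to the input space, and the whole network is fine-tuned by gradient descent on reconstruction error. The sketch below (PyTorch, not the authors' code) illustrates that structure only; the layer sizes and sigmoid activations are illustrative assumptions, and the layer-wise pretraining that is the paper's actual contribution is omitted, so the weights here start from plain random initialization, which the paper argues works poorly for deep autoencoders.

# Minimal sketch of a deep autoencoder with a small central code layer,
# fine-tuned by gradient descent on reconstruction error.
# Not the authors' implementation; dimensions and activations are assumptions.
import torch
import torch.nn as nn

class DeepAutoencoder(nn.Module):
    def __init__(self, input_dim=784, code_dim=30):
        super().__init__()
        # Encoder: progressively narrower layers down to the low-dimensional code.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 500), nn.Sigmoid(),
            nn.Linear(500, 250), nn.Sigmoid(),
            nn.Linear(250, code_dim),          # linear code layer
        )
        # Decoder: mirror of the encoder, mapping the code back to the input space.
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 250), nn.Sigmoid(),
            nn.Linear(250, 500), nn.Sigmoid(),
            nn.Linear(500, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

# Fine-tuning loop: minimize reconstruction error with stochastic gradient descent.
model = DeepAutoencoder()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

x = torch.rand(64, 784)  # stand-in batch; the paper uses images and documents
for step in range(100):
    recon, code = model(x)
    loss = loss_fn(recon, x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

With a single linear hidden layer and squared reconstruction error, such a network recovers the same subspace as PCA, which is why the paper uses principal components analysis as the baseline for its nonlinear, deep codes.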

Keywords

Autoencoder, Curse of dimensionality, Initialization, Gradient descent, Artificial neural network, Computer science, Principal component analysis, Artificial intelligence, Pattern recognition (psychology), Layer (electronics), High dimensional, Principal (computer security), Algorithm, Materials science, Nanotechnology

Publication Info

Year: 2006
Type: Article
Volume: 313
Issue: 5786
Pages: 504-507
Citations: 20153
Access: Closed

Citation Metrics

20153 citations (OpenAlex)

Cite This

Geoffrey E. Hinton, Ruslan Salakhutdinov (2006). Reducing the Dimensionality of Data with Neural Networks. Science, 313(5786), 504-507. https://doi.org/10.1126/science.1127647

Identifiers

DOI: 10.1126/science.1127647