Abstract

Various techniques for optimizing criterion functions to train neural-net classifiers are investigated. These include three standard deterministic techniques (variable metric, conjugate gradient, and steepest descent) and a new stochastic technique. It is found that the stochastic technique is preferable on problems with large training sets and that the convergence rates of the variable metric and conjugate gradient techniques are similar.
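To make the comparison concrete, the following is a minimal sketch, in Python with NumPy, contrasting two of the techniques named above: full-batch steepest descent versus per-sample stochastic updates, applied to a simple logistic classifier. Everything here (the model, data, step sizes, and iteration counts) is an illustrative assumption, not the paper's experimental setup.

# Illustrative sketch only: contrasts batch steepest descent with a
# per-sample stochastic update on a toy logistic classifier. The model,
# hyperparameters, and data are assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                  # toy training set
y = (X[:, 0] + X[:, 1] > 0).astype(float)      # linearly separable labels

def grad(w, Xb, yb):
    # Gradient of the cross-entropy criterion for a single logistic unit.
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    return Xb.T @ (p - yb) / len(yb)

# Steepest descent: one step per pass over the full training set.
w_sd = np.zeros(2)
for _ in range(100):
    w_sd -= 0.5 * grad(w_sd, X, y)

# Stochastic technique: many cheap per-sample steps, which is why this
# style of update scales better to large training sets.
w_sgd = np.zeros(2)
for _ in range(5):
    for i in rng.permutation(len(y)):
        w_sgd -= 0.1 * grad(w_sgd, X[i:i+1], y[i:i+1])

The variable metric (quasi-Newton) and conjugate gradient techniques studied in the paper replace the raw gradient step with search directions built from curvature information; they are omitted here for brevity.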

Keywords

Computer science, Artificial neural network, Artificial intelligence, Training, Machine learning

Publication Info

Year: 1992
Type: Article
Volume: 3
Issue: 2
Pages: 232-240
Citations: 210
Access: Closed

Citation Metrics

OpenAlex: 210
Influential: 8
CrossRef: 166

Cite This

Etienne Barnard (1992). Optimization for training neural nets. IEEE Transactions on Neural Networks, 3(2), 232-240. https://doi.org/10.1109/72.125864

Identifiers

DOI: 10.1109/72.125864
PMID: 18276424
