Abstract

Adaptive optimization algorithms, such as Adam and RMSprop, have achieved better optimization performance than stochastic gradient descent (SGD) in some scenarios. However, recent studies show that they often lead to worse generalization performance than SGD, especially when training deep neural networks (DNNs). In this work, we identify the reasons why Adam generalizes worse than SGD, and develop a variant of Adam that eliminates the generalization gap. The proposed method, normalized direction-preserving Adam (ND-Adam), enables more precise control of the direction and step size for updating weight vectors, leading to significantly improved generalization performance. Following a similar rationale, we further improve the generalization performance in classification tasks by regularizing the softmax logits. By bridging the gap between SGD and Adam, we also hope to shed light on why certain optimization algorithms generalize better than others.
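The abstract describes ND-Adam only at a high level. As an illustration of a direction-preserving, Adam-style update, the sketch below updates a single hidden unit's incoming weight vector, assuming the vector is kept at unit L2 norm and the component of the gradient along the vector is projected out before the Adam moments are applied. The function name nd_adam_step, the use of a single scalar second moment per weight vector, and all hyperparameter values are assumptions of this sketch, not the paper's exact formulation.

    import numpy as np

    def nd_adam_step(w, g, m, v, t, alpha=0.05, beta1=0.9, beta2=0.999, eps=1e-8):
        """One direction-preserving, Adam-style update for a unit-norm weight vector.

        w : current weight vector (assumed to have unit L2 norm)
        g : gradient of the loss with respect to w
        m : first-moment estimate (vector); v : second-moment estimate (scalar)
        t : 1-based step counter
        """
        # Remove the radial component so the update only changes the direction of w.
        g_tangent = g - np.dot(g, w) * w

        # Adam-style moment estimates; the second moment is kept as one scalar
        # per weight vector rather than per coordinate (an assumption of this sketch).
        m = beta1 * m + (1 - beta1) * g_tangent
        v = beta2 * v + (1 - beta2) * np.dot(g_tangent, g_tangent)

        # Bias correction, as in standard Adam.
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)

        # Update, then re-normalize so the weight vector stays on the unit sphere
        # and the step size directly controls how far the direction moves.
        w = w - alpha * m_hat / (np.sqrt(v_hat) + eps)
        w = w / np.linalg.norm(w)
        return w, m, v

    # Minimal usage example with a placeholder gradient.
    w = np.random.randn(64); w /= np.linalg.norm(w)
    m, v = np.zeros_like(w), 0.0
    for t in range(1, 101):
        g = np.random.randn(64)
        w, m, v = nd_adam_step(w, g, m, v, t)

Keeping the weight vector normalized decouples its direction from its magnitude, which is what allows the step size to be controlled explicitly rather than implicitly through Adam's per-coordinate scaling.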

Keywords

Stochastic gradient descent; Generalization; Softmax function; Computer science; Artificial intelligence; Deep neural networks; Artificial neural network; Gradient descent; Bridging (networking); Deep learning; Machine learning; Mathematics

Publication Info

Year: 2018
Type: Article
Pages: 1-2
Citations: 1244
Access: Closed

Cite This

Zijun Zhang (2018). Improved Adam Optimizer for Deep Neural Networks. IEEE/ACM International Symposium on Quality of Service (IWQoS), 1-2. https://doi.org/10.1109/iwqos.2018.8624183

Identifiers

DOI: 10.1109/iwqos.2018.8624183