Abstract

Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters.
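For concreteness, the per-mini-batch transform described in the abstract can be sketched in a few lines of NumPy. This is an illustrative training-time sketch, not the authors' reference implementation; the epsilon value, the learned scale gamma and shift beta, and the toy input shapes are assumptions made for the example.

```python
import numpy as np

def batch_norm_train(x, gamma, beta, eps=1e-5):
    """Normalize one mini-batch of layer inputs, then scale and shift.

    x     : (batch, features) activations feeding a layer
    gamma : (features,) learned scale
    beta  : (features,) learned shift
    eps   : small constant for numerical stability (assumed value)
    """
    mu = x.mean(axis=0)                    # per-feature mini-batch mean
    var = x.var(axis=0)                    # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # zero-mean, unit-variance inputs
    y = gamma * x_hat + beta               # scale/shift restores representational power
    return y, mu, var                      # mu, var would feed running averages for inference

# Toy usage: a shifted, rescaled mini-batch comes out roughly zero-mean, unit-variance.
x = 3.0 * np.random.randn(32, 100) + 1.5
y, _, _ = batch_norm_train(x, np.ones(100), np.zeros(100))
print(round(y.mean(), 3), round(y.std(), 3))
```

At inference time the paper replaces the mini-batch statistics with population estimates, so the sketch above covers only the training path.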

Keywords

Normalization, Initialization, Computer science, Artificial intelligence, Margin (machine learning), Artificial neural network, Covariate, Training, Deep neural networks, Word error rate, Deep learning, Training set, Machine learning, Pattern recognition

Related Publications

Network In Network

Abstract: We propose a novel deep network structure called Network In Network (NIN) to enhance model discriminability for local patches within the receptive field. The conventional con...

2014 arXiv (Cornell University) 1037 citations

Publication Info

Year: 2015
Type: preprint
Citations: 15635
Access: Closed

Citation Metrics

15635 citations (OpenAlex)

Cite This

Sergey Ioffe, Christian Szegedy (2015). Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv (Cornell University). https://doi.org/10.57702/o9raffed

Identifiers

DOI: 10.57702/o9raffed