Abstract

Rectified activation units (rectifiers) are essential for state-of-the-art neural networks. In this work, we study rectifier neural networks for image classification from two aspects. First, we propose a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified unit. PReLU improves model fitting with nearly zero extra computational cost and little overfitting risk. Second, we derive a robust initialization method that particularly considers the rectifier nonlinearities. This method enables us to train extremely deep rectified models directly from scratch and to investigate deeper or wider network architectures. Based on the learnable activation and advanced initialization, we achieve 4.94% top-5 test error on the ImageNet 2012 classification dataset. This is a 26% relative improvement over the ILSVRC 2014 winner (GoogLeNet, 6.66% [33]). To our knowledge, our result is the first to surpass the reported human-level performance (5.1%, [26]) on this dataset.
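
The two contributions lend themselves to a compact illustration. Below is a minimal NumPy sketch (not the authors' implementation; the dense-layer shapes and function names are illustrative) of PReLU, f(y_i) = max(0, y_i) + a_i min(0, y_i) with a learnable slope a_i, and of the proposed initialization, which draws weights from a zero-mean Gaussian with standard deviation sqrt(2 / n_l), where n_l is the fan-in of layer l.

```python
import numpy as np

def prelu(y, a):
    """Parametric ReLU: f(y) = max(0, y) + a * min(0, y).
    The slope a is learnable; it may be a scalar (channel-shared)
    or a per-channel vector (channel-wise), broadcast against y."""
    return np.maximum(0.0, y) + a * np.minimum(0.0, y)

def prelu_grad_a(y):
    """Gradient of PReLU w.r.t. the slope a: min(0, y), i.e. the
    negative part of the pre-activation (zero where y > 0)."""
    return np.minimum(0.0, y)

def he_init(fan_in, fan_out, rng=np.random.default_rng(0)):
    """Zero-mean Gaussian weights with std = sqrt(2 / fan_in), which
    keeps the variance of rectified activations roughly constant
    from layer to layer, so very deep models can train from scratch."""
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

# Toy usage: one fully connected layer followed by PReLU.
x = np.random.default_rng(1).normal(size=(4, 256))  # batch of 4 inputs
W = he_init(256, 128)
a = np.full(128, 0.25)  # the paper initializes each slope a_i to 0.25
h = prelu(x @ W, a)
assert h.shape == (4, 128)
```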

Keywords

Overfitting, Initialization, Computer science, Scratch, Artificial intelligence, Rectifier (neural networks), Artificial neural network, Contextual image classification, Parametric statistics, Deep neural networks, Machine learning, Convolutional neural network, Deep learning, Pattern recognition (psychology), Image (mathematics), Recurrent neural network, Mathematics, Types of artificial neural networks

Related Publications

Network In Network

Abstract: We propose a novel deep network structure called Network In Network (NIN) to enhance model discriminability for local patches within the receptive field. The conventional con...

2014 · arXiv (Cornell University) · 1037 citations

Publication Info

Year: 2015
Type: article
Pages: 1026-1034
Citations: 18160 (OpenAlex)
Access: Closed

Cite This

Kaiming He, Xiangyu Zhang, Shaoqing Ren et al. (2015). Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. Proceedings of the IEEE International Conference on Computer Vision (ICCV), 1026-1034. https://doi.org/10.1109/iccv.2015.123

Identifiers

DOI: 10.1109/iccv.2015.123