Abstract

We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22-layer-deep network, whose quality is assessed in the context of classification and detection.
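
The page itself contains no code, but since the abstract describes an architecture built from multi-scale "Inception" modules, a minimal sketch may help illustrate the idea. The PyTorch class below, its layer names, and its filter counts are illustrative assumptions drawn from the paper's description (parallel 1x1, 3x3, and 5x5 convolutions plus a pooling branch, with 1x1 reductions to keep the computational budget in check); it is not the authors' released implementation.

```python
import torch
import torch.nn as nn


class InceptionModule(nn.Module):
    """Sketch of one Inception module: four parallel branches whose outputs
    are concatenated along the channel axis, so the network grows wider
    without a matching growth in compute. The 1x1 convolutions reduce
    channel depth before the more expensive 3x3 and 5x5 convolutions."""

    def __init__(self, in_ch, c1, c3_red, c3, c5_red, c5, pool_proj):
        super().__init__()
        self.branch1 = nn.Sequential(
            nn.Conv2d(in_ch, c1, kernel_size=1), nn.ReLU(inplace=True))
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, c3_red, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(c3_red, c3, kernel_size=3, padding=1), nn.ReLU(inplace=True))
        self.branch5 = nn.Sequential(
            nn.Conv2d(in_ch, c5_red, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(c5_red, c5, kernel_size=5, padding=2), nn.ReLU(inplace=True))
        self.branch_pool = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, kernel_size=1), nn.ReLU(inplace=True))

    def forward(self, x):
        # Multi-scale processing: each branch sees the same input through a
        # different receptive-field size; results are stacked channel-wise.
        return torch.cat(
            [self.branch1(x), self.branch3(x), self.branch5(x), self.branch_pool(x)],
            dim=1)


# Usage example on a 28x28 feature map with 192 channels; the filter counts
# (64, 96->128, 16->32, pool projection 32) are illustrative defaults.
x = torch.randn(1, 192, 28, 28)
module = InceptionModule(192, 64, 96, 128, 16, 32, 32)
print(module(x).shape)  # torch.Size([1, 256, 28, 28]): 64 + 128 + 32 + 32 channels
```

Stacking many such modules (22 weight layers in the GoogLeNet incarnation) is what the abstract means by increasing depth and width while keeping the computational budget constant.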

Keywords

Computer science, Architecture, Convolutional neural network, Intuition, Artificial intelligence, Deep learning, Network architecture, Context, Artificial neural network, Scale, Pattern recognition, Machine learning, Computer network

Publication Info

Year: 2015
Type: article
Pages: 1-9
Citations: 45596
Access: Closed

Citation Metrics

Citations (OpenAlex): 45596

Cite This

Christian Szegedy, Wei Liu, Yangqing Jia et al. (2015). Going deeper with convolutions. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1-9. https://doi.org/10.1109/cvpr.2015.7298594

Identifiers

DOI: 10.1109/cvpr.2015.7298594