Abstract

We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call cardinality (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online.
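The building block the abstract describes aggregates a set of transformations that share the same topology, added to a residual connection: y = x + Σᵢ Tᵢ(x), where the number of branches is the cardinality C. The following is a minimal NumPy sketch of that aggregation, using plain matrix multiplies in place of the paper's bottleneck convolutions; all names and dimensions here are illustrative, not from the paper.

```python
import numpy as np

def aggregated_residual_block(x, weights_in, weights_out):
    """Sketch of the aggregated residual transformation y = x + sum_i T_i(x).

    Every branch i has the same topology: project to a low-dimensional
    bottleneck, then project back to the input dimension (convolutions
    in the actual ResNeXt block; matrix multiplies here). The number of
    branches, len(weights_in), is the cardinality C.
    """
    branches = [(x @ w_in) @ w_out
                for w_in, w_out in zip(weights_in, weights_out)]
    # Residual shortcut plus the aggregated transformations.
    return x + sum(branches)

# Toy configuration: input width d = 8, bottleneck width b = 2, cardinality C = 4.
rng = np.random.default_rng(0)
C, d, b = 4, 8, 2
w_in = [rng.standard_normal((d, b)) for _ in range(C)]
w_out = [rng.standard_normal((b, d)) for _ in range(C)]
x = rng.standard_normal(d)
y = aggregated_residual_block(x, w_in, w_out)
```

Because all branches share one topology, raising C adds capacity without introducing new hyper-parameters, which is the design choice the abstract contrasts with going deeper or wider.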

Keywords

Cardinality (data modeling), Set (abstract data type), Computer science, Dimension (graph theory), Simple (philosophy), Block (permutation group theory), Artificial neural network, Residual, Code (set theory), Architecture, Theoretical computer science, Homogeneous, Artificial intelligence, Machine learning, Algorithm, Data mining, Mathematics, Programming language, Combinatorics

Publication Info

Year
2017
Type
article
Citations
11353
Access
Closed

Citation Metrics

OpenAlex: 11353
Influential: 1338
CrossRef: 8100

Cite This

Saining Xie, Ross Girshick, Piotr Dollár et al. (2017). Aggregated Residual Transformations for Deep Neural Networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). https://doi.org/10.1109/cvpr.2017.634

Identifiers

DOI
10.1109/cvpr.2017.634
arXiv
1611.05431
