Abstract

Convolutional neural networks (CNNs) are inherently limited in modeling geometric transformations due to the fixed geometric structures in their building modules. In this work, we introduce two new modules to enhance the transformation modeling capability of CNNs, namely, deformable convolution and deformable RoI pooling. Both are based on the idea of augmenting the spatial sampling locations in the modules with additional offsets and learning the offsets from the target tasks, without additional supervision. The new modules can readily replace their plain counterparts in existing CNNs and can be easily trained end-to-end by standard back-propagation, giving rise to deformable convolutional networks. Extensive experiments validate the effectiveness of our approach. For the first time, we show that learning dense spatial transformation in deep CNNs is effective for sophisticated vision tasks such as object detection and semantic segmentation. The code is released at https://github.com/msracver/Deformable-ConvNets.
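To make the core idea concrete, below is a minimal sketch of a deformable convolution layer. It is not the authors' implementation (the released code is in MXNet); it assumes PyTorch with torchvision's `deform_conv2d` op, and the class and parameter names (`DeformConv2d`, `offset_conv`) are illustrative. As in the paper, a plain convolution branch predicts 2*k*k offsets per output location, zero-initialized so training starts from a regular convolution.

```python
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class DeformConv2d(nn.Module):
    """Sketch of deformable convolution: a conv branch predicts per-location
    (dy, dx) offsets that shift the kernel's sampling grid."""

    def __init__(self, in_ch, out_ch, k=3, stride=1, padding=1):
        super().__init__()
        # Offset branch: 2 values (dy, dx) for each of the k*k kernel taps.
        # Zero-initialized so the layer behaves like a plain convolution
        # at the start of training.
        self.offset_conv = nn.Conv2d(in_ch, 2 * k * k, k,
                                     stride=stride, padding=padding)
        nn.init.zeros_(self.offset_conv.weight)
        nn.init.zeros_(self.offset_conv.bias)
        self.weight = nn.Parameter(torch.empty(out_ch, in_ch, k, k))
        nn.init.kaiming_uniform_(self.weight, a=5 ** 0.5)
        self.stride, self.padding = stride, padding

    def forward(self, x):
        offsets = self.offset_conv(x)          # (N, 2*k*k, H_out, W_out)
        # Bilinear sampling at the offset locations, then the usual
        # weighted sum -- standard convolution on a deformed grid.
        return deform_conv2d(x, offsets, self.weight,
                             stride=self.stride, padding=self.padding)

x = torch.randn(1, 16, 32, 32)
layer = DeformConv2d(16, 32)
print(layer(x).shape)   # torch.Size([1, 32, 32, 32])
```

Because the offset branch is just another convolution, its parameters are learned end-to-end by standard back-propagation, which is what lets the module drop into existing CNNs without extra supervision.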

Keywords

Computer science; Artificial intelligence; Computer vision; Pattern recognition; Deep learning; Artificial neural network; Convolutional neural network; Convolution; Pooling; Geometric transformation; Object detection; Semantic segmentation

Publication Info

Year
2017
Type
Conference paper
Pages
764-773
Citations
6444 (OpenAlex)
Access
Closed

Cite This

Jifeng Dai, Haozhi Qi, Yuwen Xiong et al. (2017). Deformable Convolutional Networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 764-773. https://doi.org/10.1109/iccv.2017.89

Identifiers

DOI
10.1109/iccv.2017.89