Abstract

Recent studies on mobile network design have demonstrated the remarkable effectiveness of channel attention (e.g., Squeeze-and-Excitation attention) for lifting model performance, but they generally neglect positional information, which is important for generating spatially selective attention maps. In this paper, we propose a novel attention mechanism for mobile networks by embedding positional information into channel attention, which we call "coordinate attention". Unlike channel attention, which transforms a feature tensor into a single feature vector via 2D global pooling, coordinate attention factorizes channel attention into two 1D feature encoding processes that aggregate features along the two spatial directions, respectively. In this way, long-range dependencies can be captured along one spatial direction while precise positional information is preserved along the other. The resulting feature maps are then encoded separately into a pair of direction-aware and position-sensitive attention maps that can be complementarily applied to the input feature map to augment the representations of the objects of interest. Our coordinate attention is simple and can be flexibly plugged into classic mobile networks, such as MobileNetV2, MobileNeXt, and EfficientNet, with nearly no computational overhead. Extensive experiments demonstrate that our coordinate attention is not only beneficial to ImageNet classification but, more interestingly, also behaves better in downstream tasks such as object detection and semantic segmentation. Code is available at https://github.com/Andrew-Qibin/CoordAttention.
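The factorization described in the abstract can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification, not the paper's implementation: the real module applies learned convolutional transforms and normalization to the pooled features, which are stood in for here by optional weight matrices (`w_h`, `w_w`, identity if omitted).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def coordinate_attention(x, w_h=None, w_w=None):
    """Sketch of coordinate attention on a single (C, H, W) feature map.

    Channel attention (e.g., SE) would pool x into one (C,) vector via 2D
    global pooling; here pooling is factorized into two 1D aggregations,
    one per spatial direction, so positional information survives.
    w_h / w_w are hypothetical stand-ins for the learned transforms.
    """
    C, H, W = x.shape
    # 1D average pooling along each spatial direction.
    pooled_h = x.mean(axis=2)          # (C, H): aggregated over width
    pooled_w = x.mean(axis=1)          # (C, W): aggregated over height
    if w_h is not None:
        pooled_h = w_h @ pooled_h      # placeholder for learned encoding
    if w_w is not None:
        pooled_w = w_w @ pooled_w
    # Direction-aware, position-sensitive attention maps,
    # applied complementarily to the input via broadcasting.
    a_h = sigmoid(pooled_h)[:, :, None]   # (C, H, 1)
    a_w = sigmoid(pooled_w)[:, None, :]   # (C, 1, W)
    return x * a_h * a_w

x = np.random.rand(4, 8, 8).astype(np.float32)
y = coordinate_attention(x)
```

Because each attention weight lies in (0, 1), the output is an attenuated copy of the input whose scaling varies with both the row and column position, which is exactly what a single pooled channel vector cannot express.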

Keywords

Computer science, Feature (linguistics), Pooling, Embedding, Attention network, Channel (broadcasting), Artificial intelligence, Overhead (engineering), Encoding (memory), Pattern recognition (psychology), Segmentation, Discriminative model, Computer vision, Computer network

Related Publications

Squeeze-and-Excitation Networks

Convolutional neural networks are built upon the convolution operation, which extracts informative features by fusing spatial and channel-wise information together within local ...

2018 · 2018 IEEE/CVF Conference on Computer ... · 25361 citations

Publication Info

Year: 2021
Type: article
Pages: 13708-13717
Citations: 4986
Access: Closed

Citation Metrics

OpenAlex: 4986
Influential: 198
CrossRef: 4557

Cite This

Qibin Hou, Daquan Zhou, Jiashi Feng (2021). Coordinate Attention for Efficient Mobile Network Design. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 13708-13717. https://doi.org/10.1109/cvpr46437.2021.01350

Identifiers

DOI
10.1109/cvpr46437.2021.01350
arXiv
2103.02907

Data Quality

Data completeness: 84%