Abstract

One of the recent trends [31, 32, 14] in network architecture design is to stack small filters (e.g., 1×1 or 3×3) throughout the entire network, because stacked small filters are more efficient than a large kernel given the same computational complexity. However, in the field of semantic segmentation, where we need to perform dense per-pixel prediction, we find that the large kernel (and its effective receptive field) plays an important role when the classification and localization tasks must be performed simultaneously. Following our design principle, we propose a Global Convolutional Network to address both the classification and localization issues for semantic segmentation. We also suggest a residual-based boundary refinement to further sharpen the object boundaries. Our approach achieves state-of-the-art performance on two public benchmarks and significantly outperforms previous results: 82.2% (vs. 80.2%) on the PASCAL VOC 2012 dataset and 76.9% (vs. 71.8%) on the Cityscapes dataset.
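To make the large-kernel idea concrete, below is a minimal PyTorch sketch (not the authors' released code) of a GCN-style module and the residual boundary-refinement block. The symmetric separable decomposition (k×1 followed by 1×k, plus the mirrored branch) and the residual refinement follow the paper's description; the class names, channel counts, and the choice of k = 15 here are illustrative assumptions.

```python
# Hypothetical sketch of the paper's two modules; names, channel
# counts, and k are illustrative assumptions, not the official code.
import torch
import torch.nn as nn

class GlobalConvModule(nn.Module):
    """GCN-style module: a large k x k kernel emulated by two
    separable branches, (k x 1 -> 1 x k) + (1 x k -> k x 1)."""
    def __init__(self, in_ch, out_ch, k=15):
        super().__init__()
        p = k // 2  # padding preserves the spatial resolution
        self.branch_a = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, (k, 1), padding=(p, 0)),
            nn.Conv2d(out_ch, out_ch, (1, k), padding=(0, p)),
        )
        self.branch_b = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, (1, k), padding=(0, p)),
            nn.Conv2d(out_ch, out_ch, (k, 1), padding=(p, 0)),
        )

    def forward(self, x):
        # Summing the branches gives dense connections within a
        # k x k window at O(k) cost instead of O(k^2).
        return self.branch_a(x) + self.branch_b(x)

class BoundaryRefine(nn.Module):
    """Residual-based boundary refinement block."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # identity shortcut refines boundaries

# Example: a 15x15 effective kernel scoring 21 classes (PASCAL VOC)
# on a hypothetical ResNet stage-4 feature map.
feat = torch.randn(1, 1024, 32, 32)
score = BoundaryRefine(21)(GlobalConvModule(1024, 21, k=15)(feat))
print(score.shape)  # torch.Size([1, 21, 32, 32])
```

The design point the sketch illustrates: the two stacked 1-D convolutions give every output pixel a k×k receptive field (helping classification) while the per-pixel residual refinement sharpens localization, which is exactly the classification-plus-localization tension the abstract describes.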

Keywords

Computer science, Segmentation, Artificial intelligence, Kernel (algebra), Pascal (unit), Convolutional neural network, Residual, Pattern recognition (psychology), Image segmentation, Field (mathematics), Machine learning, Algorithm, Mathematics

Publication Info

Year
2017
Type
Conference paper (CVPR 2017)
Citations
1691 (OpenAlex)
Access
Closed

Cite This

Chao Peng, Xiangyu Zhang, Gang Yu et al. (2017). Large Kernel Matters — Improve Semantic Segmentation by Global Convolutional Network. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). https://doi.org/10.1109/cvpr.2017.189

Identifiers

DOI
10.1109/cvpr.2017.189