Abstract

The topic of semantic segmentation has witnessed considerable progress due to the powerful features learned by convolutional neural networks (CNNs). The current leading approaches for semantic segmentation exploit shape information by extracting CNN features from masked image regions. This strategy introduces artificial boundaries into the images and may degrade the quality of the extracted features. Moreover, operating on the raw image domain requires computing the network thousands of times on a single image, which is time-consuming. In this paper, we propose to exploit shape information via masking convolutional features. The proposal segments (e.g., super-pixels) are treated as masks on the convolutional feature maps. The CNN features of segments are directly masked out from these maps and used to train classifiers for recognition. We further propose a joint method for handling objects and "stuff" (e.g., grass, sky, water) in the same framework. State-of-the-art results are demonstrated on the PASCAL VOC and the new PASCAL-CONTEXT benchmarks, with a compelling computational speed.
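The core idea of the abstract — projecting a segment mask onto the convolutional feature maps and pooling only the activations inside the segment — can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the nearest-neighbor mask projection and average pooling here are assumptions, whereas the paper's mapping accounts for the network's strides and receptive fields.

```python
import numpy as np

def masked_feature(feature_map, segment_mask):
    """Pool a segment's feature vector from a convolutional feature map.

    feature_map:  (C, fh, fw) activations from a conv layer.
    segment_mask: (ih, iw) binary mask at image resolution.
    Returns a (C,) per-channel average over locations inside the segment.
    """
    C, fh, fw = feature_map.shape
    ih, iw = segment_mask.shape
    # Project the image-resolution mask onto the feature grid by
    # nearest-neighbor subsampling (an assumed, simplified projection).
    rows = np.arange(fh) * ih // fh
    cols = np.arange(fw) * iw // fw
    low_mask = segment_mask[np.ix_(rows, cols)].astype(feature_map.dtype)
    # Zero out activations outside the segment, then average-pool the rest.
    masked = feature_map * low_mask
    area = max(low_mask.sum(), 1.0)
    return masked.reshape(C, -1).sum(axis=1) / area

# Toy usage: a 4-channel 8x8 feature map and a segment covering the
# left half of a 32x32 image.
fmap = np.random.rand(4, 8, 8)
mask = np.zeros((32, 32))
mask[:, :16] = 1
vec = masked_feature(fmap, mask)  # shape (4,): one pooled value per channel
```

Because the masking happens on feature maps computed once per image, many segments can reuse the same forward pass — the source of the speedup over running the network on thousands of masked image crops.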

Keywords

Computer science, PASCAL VOC, Artificial intelligence, Convolutional neural network, Segmentation, Pattern recognition, Masking, Feature extraction, Computer vision, Image segmentation, Pixel, Object detection

Publication Info

Year
2015
Type
conference paper
Citations
466
Access
Closed

Cite This

Jifeng Dai, Kaiming He, Jian Sun (2015). Convolutional feature masking for joint object and stuff segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). https://doi.org/10.1109/cvpr.2015.7299025

Identifiers

DOI
10.1109/cvpr.2015.7299025