Abstract

We present a novel and high-performance 3D object detection framework, named PointVoxel-RCNN (PV-RCNN), for accurate 3D object detection from point clouds. Our proposed method deeply integrates both 3D voxel Convolutional Neural Networks (CNNs) and PointNet-based set abstraction to learn more discriminative point cloud features. It takes advantage of the efficient learning and high-quality proposals of the 3D voxel CNN and the flexible receptive fields of PointNet-based networks. Specifically, the proposed framework summarizes the 3D scene with a 3D voxel CNN into a small set of keypoints via a novel voxel set abstraction module to save follow-up computation and to encode representative scene features. Given the high-quality 3D proposals generated by the voxel CNN, RoI-grid pooling is proposed to abstract proposal-specific features from the keypoints to the RoI-grid points via keypoint set abstraction. Compared with conventional pooling operations, the RoI-grid feature points encode much richer context information for accurately estimating object confidences and locations. Extensive experiments on both the KITTI dataset and the Waymo Open dataset show that our proposed PV-RCNN surpasses state-of-the-art 3D detection methods by remarkable margins.
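The PointNet-style set abstraction the abstract refers to can be pictured as: for each keypoint, gather neighboring points within a radius, transform their relative coordinates and features with a shared layer, and max-pool into a single feature vector. The following is a minimal, hypothetical NumPy sketch of that idea, not the authors' implementation (the function name, single linear layer, and fixed radius are illustrative assumptions):

```python
import numpy as np

def set_abstraction(points, feats, keypoints, radius, weight):
    """Hypothetical single-scale set abstraction sketch.

    points: (N, 3) point coordinates; feats: (N, C_in) point features;
    keypoints: (K, 3) query locations; weight: (C_in + 3, C_out) shared
    linear transform standing in for a PointNet MLP. Returns (K, C_out).
    """
    K, C_out = keypoints.shape[0], weight.shape[1]
    out = np.zeros((K, C_out))
    for k, kp in enumerate(keypoints):
        # ball query: all points within the given radius of the keypoint
        dists = np.linalg.norm(points - kp, axis=1)
        idx = np.where(dists < radius)[0]
        if idx.size == 0:
            continue  # empty neighborhood keeps a zero feature
        # relative coordinates + features, shared transform (ReLU), max-pool
        local = np.concatenate([points[idx] - kp, feats[idx]], axis=1)
        out[k] = np.maximum(local @ weight, 0).max(axis=0)
    return out
```

In PV-RCNN this grouping-and-pooling pattern is applied twice: once to summarize voxel CNN features at the keypoints (voxel set abstraction), and once to aggregate keypoint features at the RoI-grid points (RoI-grid pooling).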

Keywords

Computer science, Artificial intelligence, Point cloud, Abstraction, Voxel, Pattern recognition, Pooling, Object detection, Discriminative model, Convolutional neural network, Computer vision, Feature extraction, Grid, Set (abstract data type), Encoding, Mathematics

Publication Info

Year: 2020
Type: article
Pages: 10526-10535
Citations: 1878
Access: Closed


Citation Metrics

OpenAlex: 1878
Influential: 326
CrossRef: 1712

Cite This

Shaoshuai Shi, Chaoxu Guo, Li Jiang et al. (2020). PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 10526-10535. https://doi.org/10.1109/cvpr42600.2020.01054

Identifiers

DOI
10.1109/cvpr42600.2020.01054
arXiv
1912.13192
