Learning Scale-Adaptive Representations for Point-Level LiDAR Semantic Segmentation
Authors: Tongfeng Zhang, Kaizhi Yang and Xuejin Chen |
Abstract: Objects of widely varying scales and categories in autonomous driving scenes pose a great challenge to LiDAR semantic segmentation. Although the voxel-based 3D convolutional networks employed by existing state-of-the-art methods can extract features at different spatial scales, they cannot effectively discriminate among and combine those features. In this paper, we propose a Scale-Adaptive Fusion (SAF) module that progressively and selectively fuses features with different receptive fields, helping the network adapt to scale variations across objects. In addition, we propose a novel Local Point Refinement (LPR) module to address the quantization loss inherent in voxel-based methods: it takes the geometric structure of the original point cloud into account by converting voxel-wise features into point-wise ones. Our proposed method is evaluated on three public datasets, i.e., SemanticKITTI, SemanticPOSS, and nuScenes, and achieves competitive performance.
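The abstract gives no implementation details, but the "progressive and selective" fusion it describes is reminiscent of soft attention over feature scales. The following is a minimal PyTorch sketch of one way such a scale-adaptive fusion step could look; the class name `ScaleAdaptiveFusion`, the gating MLP, and all tensor shapes are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn


class ScaleAdaptiveFusion(nn.Module):
    """Sketch of selective multi-scale fusion: per-point soft weights
    decide how much each receptive-field scale contributes. All design
    choices here are assumptions made for illustration."""

    def __init__(self, channels: int, num_scales: int):
        super().__init__()
        # A small MLP predicts one logit per scale for each point/voxel.
        self.gate = nn.Sequential(
            nn.Linear(num_scales * channels, channels),
            nn.ReLU(inplace=True),
            nn.Linear(channels, num_scales),
        )

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        # feats: list of (N, C) tensors, one per scale, for N points/voxels.
        stacked = torch.stack(feats, dim=1)                   # (N, S, C)
        logits = self.gate(stacked.flatten(1))                # (N, S)
        weights = torch.softmax(logits, dim=1)                # soft selection over scales
        return (weights.unsqueeze(-1) * stacked).sum(dim=1)   # (N, C)


if __name__ == "__main__":
    # Toy usage: fuse three 64-channel feature maps for 1024 points.
    saf = ScaleAdaptiveFusion(channels=64, num_scales=3)
    feats = [torch.randn(1024, 64) for _ in range(3)]
    fused = saf(feats)
    print(fused.shape)  # torch.Size([1024, 64])
```

The softmax gate makes the combination a convex, per-point weighting rather than a fixed sum, which matches the abstract's claim of adapting to scale variation across objects; the same per-point gating idea could plausibly extend to the voxel-to-point conversion the LPR module performs, though the paper's actual mechanism may differ.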