Learning Scale-Adaptive Representations for Point-Level LiDAR Semantic Segmentation

Authors:

Tongfeng Zhang, Kaizhi Yang and Xuejin Chen

Abstract:

The large number of objects with various scales and categories in autonomous driving scenes poses a great challenge to the LiDAR semantic segmentation task. Although the voxel-based 3D convolutional networks employed by existing state-of-the-art methods can extract features at different spatial scales, they cannot effectively discriminate among and combine these features. In this paper, we propose a Scale-Adaptive Fusion (SAF) module that progressively and selectively fuses features with different receptive fields, helping the network adapt to scale variations across objects. In addition, we propose a novel Local Point Refinement (LPR) module to address the quantization loss of voxel-based methods: it takes the geometric structure of the original point cloud into account by converting voxel-wise features into point-wise ones. Our proposed method is evaluated on three public datasets, i.e., SemanticKITTI, SemanticPOSS, and nuScenes, and achieves competitive performance.
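The abstract names two components without giving implementation detail; the PyTorch sketch below illustrates one plausible reading of each: a gated fusion of multi-scale voxel features (for SAF) and trilinear sampling of voxel features at the original point locations (for LPR). All class names, shapes, and the gating design here are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ScaleAdaptiveFusion(nn.Module):
    """Fuses feature maps with different receptive fields by predicting a
    per-voxel soft weight for each scale (hypothetical reading of SAF)."""

    def __init__(self, channels: int, num_scales: int):
        super().__init__()
        # One gating logit per scale, predicted from the concatenated features.
        self.gate = nn.Conv3d(channels * num_scales, num_scales, kernel_size=1)

    def forward(self, feats):
        # feats: list of (B, C, D, H, W) tensors already resampled to a
        # common resolution, one per receptive-field scale.
        stacked = torch.stack(feats, dim=1)               # (B, S, C, D, H, W)
        logits = self.gate(torch.cat(feats, dim=1))       # (B, S, D, H, W)
        weights = F.softmax(logits, dim=1).unsqueeze(2)   # (B, S, 1, D, H, W)
        return (weights * stacked).sum(dim=1)             # (B, C, D, H, W)


def voxel_to_point(voxel_feat, points, grid_min, grid_max):
    """Trilinearly samples voxel features at the original point coordinates,
    a common way to mitigate quantization loss (a stand-in for LPR).
    voxel_feat: (B, C, D, H, W); points: (B, N, 3) ordered (x, y, z)."""
    # Normalize coordinates to [-1, 1], the range grid_sample expects.
    norm = 2.0 * (points - grid_min) / (grid_max - grid_min) - 1.0
    grid = norm.view(points.shape[0], -1, 1, 1, 3)        # (B, N, 1, 1, 3)
    sampled = F.grid_sample(voxel_feat, grid, align_corners=True)  # (B, C, N, 1, 1)
    return sampled.flatten(2).transpose(1, 2)             # (B, N, C)


# Example: fuse three scales and read the result back at 1024 point locations.
saf = ScaleAdaptiveFusion(channels=64, num_scales=3)
feats = [torch.randn(2, 64, 16, 16, 16) for _ in range(3)]
fused = saf(feats)                                        # (2, 64, 16, 16, 16)
pts = torch.rand(2, 1024, 3)                              # coordinates in [0, 1]
pt_feat = voxel_to_point(fused, pts, 0.0, 1.0)            # (2, 1024, 64)
```

In this reading, the softmax gate lets each voxel weight the scales differently, which matches the abstract's "progressively and selectively fuse" phrasing, while trilinear sampling recovers per-point features that a pure voxel grid would quantize away.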



  Important Dates

All deadlines are 23:59 Pacific Time (PT). No extensions will be granted.

Paper registration: July 30, 2021 (extended from July 23)
Paper submission: July 30, 2021
Supplementary: August 8, 2021
Tutorial submission: August 15, 2021
Tutorial notification: August 31, 2021
Rebuttal period: September 16-22, 2021
Paper notification: October 1, 2021
Camera ready: October 15, 2021
Demo submission: November 15, 2021 (extended from July 30)
Demo notification: November 19, 2021 (originally October 1)
Tutorial: November 30, 2021
Main conference: December 1-3, 2021
