Conflicts between Likelihood and Knowledge Distillation in Task Incremental Learning for 3D Object Detection

Authors:

Peng Yun, Jun Cen, and Ming Liu

Abstract:

In autonomous driving scenarios, edge cases require perception algorithms, such as 3D object detection, to incrementally learn from new data over the long term. To achieve this, previous methods rely on knowledge distillation, recursively transferring knowledge from old models to new ones. However, conflicts arise between the likelihood term and the distillation regularizer on both old and new knowledge. In this paper, we analyze this drawback of knowledge distillation in the task-incremental-learning scenario for 3D object detection and propose a New-Task-Aware Biased Sampling strategy and a Knowledge-Distillation-Aware Detection Loss to resolve the conflicts. On the KITTI dataset, we thoroughly evaluate the proposed method in terms of both forward and backward transfer in the task-incremental-learning scenario. A substantial improvement over the whole task sequence (5.6 mAP) demonstrates the effectiveness of the proposed method.
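For context, the sketch below shows the standard distillation-regularized objective in which such a conflict arises: the likelihood term fits the new task's labels while the distillation regularizer pulls predictions toward the frozen old model. This is a minimal generic sketch, not the paper's proposed loss; the function name and the hyperparameters lam and temp are illustrative assumptions.

import torch
import torch.nn.functional as F


def incremental_detection_loss(new_logits, old_logits, targets, lam=1.0, temp=2.0):
    # Likelihood term: fit the ground-truth labels of the new task.
    likelihood = F.cross_entropy(new_logits, targets)

    # Distillation regularizer: keep the new model's temperature-softened
    # class distribution close to the frozen old model's. This term pulls
    # predictions toward the old responses even where the new labels
    # disagree, which is the source of the conflict discussed above.
    distill = F.kl_div(
        F.log_softmax(new_logits / temp, dim=-1),
        F.softmax(old_logits / temp, dim=-1),
        reduction="batchmean",
    ) * (temp * temp)

    return likelihood + lam * distill


# Toy check: 8 box proposals, 4 classes.
new_logits = torch.randn(8, 4)
old_logits = torch.randn(8, 4)
targets = torch.randint(0, 4, (8,))
print(incremental_detection_loss(new_logits, old_logits, targets))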



  Important Dates

All deadlines are 23:59 Pacific Time (PT). No extensions will be granted.

Paper registration July 30, 2021 (extended from July 23)
Paper submission July 30, 2021
Supplementary August 8, 2021
Tutorial submission August 15, 2021
Tutorial notification August 31, 2021
Rebuttal period September 16-22, 2021
Paper notification October 1, 2021
Camera ready October 15, 2021
Demo submission November 15, 2021 (extended from July 30)
Demo notification November 19, 2021 (extended from October 1)
Tutorial November 30, 2021
Main conference December 1-3, 2021

  Sponsors