3D-MetaConNet: Meta-learning for 3D Shape Classification and Segmentation


Hao Huang, Xiang Li, Lingjing Wang and Yi Fang


Supervised learning on 3D shapes has been extensively studied in prior literature, with PointNet and its variant PointNet++ as representatives. However, these methods tackle 3D shape learning by training from scratch with a fixed learning algorithm over large amounts of labeled data, and are therefore potentially challenged by data and computation bottlenecks. In this paper, we design a novel model, under the framework of meta-learning, to learn 3D shape representations. By training over multiple 3D tasks, each defined as a supervised learning problem, our method can quickly adapt to unseen tasks containing limited labeled data. Specifically, our model consists of a 3D-meta-learner and a task-oriented 3D-learner, where the 3D-meta-learner produces a parameter initialization for the 3D-learner after being trained over different tasks. With adaptively initialized parameters, the 3D-learner can be tuned rapidly in a few steps to achieve good performance on novel tasks with a small amount of training data. To further facilitate discriminative shape feature learning, we introduce a novel task-aware feature adaptation module under a contrastive learning scheme, in which all shapes in each task are considered as a whole and task-oriented compact features are learned. We therefore dub our model 3D-MetaConNet. Experiments on three public 3D datasets for few-shot shape classification and segmentation demonstrate that our method learns compact and discriminative 3D shape features efficiently and robustly in a fast-adaptation manner. Our method notably outperforms methods without a meta-learning framework and is also superior to existing meta-learning approaches.
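The two-stage scheme the abstract describes (a meta-learner that produces a parameter initialization across tasks, then a learner tuned in a few gradient steps on a new task) can be illustrated with a minimal toy sketch. This is not the paper's method: it uses a hypothetical 1-D regression "task" in place of a 3D shape task, and a first-order (Reptile-style) update in place of the paper's 3D-meta-learner; all function names here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task():
    # Hypothetical toy "task": 1-D linear regression y = w_true * x,
    # standing in for one few-shot 3D shape learning task.
    w_true = rng.uniform(-2.0, 2.0)
    x = rng.uniform(-1.0, 1.0, size=10)
    return x, w_true * x

def loss_grad(w, x, y):
    # Mean squared error and its gradient w.r.t. the scalar weight w.
    err = w * x - y
    return np.mean(err ** 2), np.mean(2 * err * x)

def adapt(w0, x, y, steps=5, lr=0.1):
    # The "learner": a few gradient steps from the meta-learned init w0.
    w = w0
    for _ in range(steps):
        _, g = loss_grad(w, x, y)
        w -= lr * g
    return w

# The "meta-learner": first-order meta-training of the initialization,
# moving it toward the weights obtained after per-task adaptation.
w_init = 0.0
for _ in range(200):
    x, y = make_task()
    w_adapted = adapt(w_init, x, y)
    w_init += 0.05 * (w_adapted - w_init)

# Fast adaptation on an unseen task with only a few steps and samples.
x_new, y_new = make_task()
before, _ = loss_grad(w_init, x_new, y_new)
after, _ = loss_grad(adapt(w_init, x_new, y_new), x_new, y_new)
print(before, "->", after)
```

The point of the sketch is only the control flow: an outer loop that shapes the initialization over many tasks, and an inner loop that adapts from that initialization with little data, mirroring the fast-adaptation behavior claimed in the abstract.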


  Important Dates

All deadlines are 23:59 Pacific Time (PT). No extensions will be granted.

Paper registration July 30, 2021
Paper submission July 30, 2021
Supplementary August 8, 2021
Tutorial submission August 15, 2021
Tutorial notification August 31, 2021
Rebuttal period September 16-22, 2021
Paper notification October 1, 2021
Camera ready October 15, 2021
Demo submission November 15, 2021
Demo notification November 19, 2021
Tutorial November 30, 2021
Main conference December 1-3, 2021