Mix3D: Out-of-Context Data Augmentation for 3D Scenes

Authors:

Alexey Nekrasov, Jonas Schult, Or Litany, Bastian Leibe and Francis Engelmann

Abstract:

We present Mix3D, a data augmentation technique for segmenting large-scale 3D scenes that is robust to strong scene priors. Since scene context helps reasoning about object semantics, current works focus on models with large capacity and receptive fields that can fully capture the global context of an input 3D scene. In this work, we focus on the importance of balancing global scene context and local object geometry, with the goal of avoiding overfitting to contextual priors in the training set. To this end, we propose Mix3D, which creates new training samples by mixing 3D scenes. By doing so, it implicitly places object instances into novel out-of-context environments. We perform detailed analysis to understand the importance of global context, local geometry, and the effect of mixing scenes. In experiments, we show that models trained with Mix3D profit from a significant performance boost on indoor (ScanNet, S3DIS) and outdoor datasets (SemanticKITTI). Trained with Mix3D, MinkowskiNet outperforms all prior state-of-the-art methods by a significant margin on the ScanNet test benchmark (78.1% mIoU).
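The core operation the abstract describes, forming a new training sample as the union of two augmented scenes, can be sketched in a few lines. The following is a minimal illustration only, not the authors' reference implementation; the function names and the particular augmentation (random rotation about the up axis plus re-centering, so the two scenes overlap) are assumptions made for the sketch.

    import numpy as np

    def random_rigid_augment(points):
        # Illustrative augmentation (assumed, not from the paper):
        # random rotation about the z (up) axis, then re-centering
        # so both scenes overlap when mixed.
        theta = np.random.uniform(0.0, 2.0 * np.pi)
        c, s = np.cos(theta), np.sin(theta)
        rot = np.array([[c, -s, 0.0],
                        [s,  c, 0.0],
                        [0.0, 0.0, 1.0]])
        points = points @ rot.T
        return points - points.mean(axis=0)

    def mix3d(points_a, labels_a, points_b, labels_b):
        # Mix two scenes by concatenating their augmented point clouds;
        # per-point labels are concatenated in the same order, so objects
        # from scene A now appear in the context of scene B and vice versa.
        mixed_points = np.concatenate([random_rigid_augment(points_a),
                                       random_rigid_augment(points_b)], axis=0)
        mixed_labels = np.concatenate([labels_a, labels_b], axis=0)
        return mixed_points, mixed_labels

In a training pipeline, such a function would be applied to randomly drawn pairs of scenes before voxelization, leaving the downstream network (e.g., MinkowskiNet) unchanged.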



  Important Dates

All deadlines are 23:59 Pacific Time (PT). No extensions will be granted.

Paper registration July 30, 2021 (extended from July 23)
Paper submission July 30, 2021
Supplementary August 8, 2021
Tutorial submission August 15, 2021
Tutorial notification August 31, 2021
Rebuttal period September 16-22, 2021
Paper notification October 1, 2021
Camera ready October 15, 2021
Demo submission November 15, 2021 (extended from July 30)
Demo notification November 19, 2021 (extended from October 1)
Tutorial November 30, 2021
Main conference December 1-3, 2021
