RGBD-Net: Predicting Color and Depth images for Novel Views Synthesis

Authors:

Phong Ha Nguyen, Animesh Karnewar, Lam Huynh, Esa Rahtu, Jiri Matas and Janne Heikkila

Abstract:

We propose a new cascaded architecture for novel view synthesis, called RGBD-Net, which consists of two core components: a hierarchical depth regression network and a depth-aware generator network. The former predicts depth maps of the target views using adaptive depth scaling, while the latter leverages the predicted depths to render spatially and temporally consistent target images. In the experimental evaluation on standard datasets, RGBD-Net not only outperforms the state of the art by a clear margin, but it also generalizes well to new scenes without per-scene optimisation. Moreover, we show that RGBD-Net can optionally be trained without depth supervision while still retaining high-quality rendering. Thanks to the depth regression network, RGBD-Net can also be used to create dense 3D point clouds that are more accurate than those produced by some state-of-the-art multi-view stereo methods.
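As a rough illustration of the two-stage cascade described in the abstract, below is a minimal PyTorch-style sketch: a depth regression stage with adaptive depth scaling feeding a depth-aware image generator. The module names, layer sizes, and input interface are placeholder assumptions chosen for brevity, not the authors' implementation.

```python
# Minimal sketch of a two-stage novel-view-synthesis cascade in the spirit of
# RGBD-Net: depth regression first, depth-aware image generation second.
# All module names, channel sizes, and interfaces are illustrative assumptions,
# not the authors' architecture.
import torch
import torch.nn as nn


class DepthRegressionNet(nn.Module):
    """Predicts a depth map for the target view from (already warped) source imagery."""

    def __init__(self, in_channels=3, base=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU(inplace=True),
        )
        # One-channel head: per-pixel depth in a normalized [0, 1] range,
        # rescaled afterwards to the scene's near/far interval.
        self.depth_head = nn.Sequential(nn.Conv2d(base, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, warped_rgb, near, far):
        feat = self.encoder(warped_rgb)
        norm_depth = self.depth_head(feat)
        # Adaptive depth scaling: map the normalized prediction into [near, far].
        return near + (far - near) * norm_depth


class DepthAwareGenerator(nn.Module):
    """Renders the target RGB image conditioned on the predicted depth map."""

    def __init__(self, base=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, warped_rgb, depth):
        return self.net(torch.cat([warped_rgb, depth], dim=1))


class RGBDNetSketch(nn.Module):
    """Cascade of the two components: depth regression, then depth-aware generation."""

    def __init__(self):
        super().__init__()
        self.depth_net = DepthRegressionNet()
        self.generator = DepthAwareGenerator()

    def forward(self, warped_rgb, near, far):
        depth = self.depth_net(warped_rgb, near, far)
        rgb = self.generator(warped_rgb, depth)
        return rgb, depth


if __name__ == "__main__":
    model = RGBDNetSketch()
    dummy = torch.rand(1, 3, 64, 64)      # stand-in for source images warped to the target view
    rgb, depth = model(dummy, near=0.5, far=10.0)
    print(rgb.shape, depth.shape)         # (1, 3, 64, 64), (1, 1, 64, 64)
```

The sketch also hints at why the predicted depth is returned alongside the image: it is the by-product that allows the network to be used for dense 3D point-cloud reconstruction, as noted in the abstract.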



  Important Dates

All deadlines are 23:59 Pacific Time (PT). No extensions will be granted.

Paper registration: July 30, 2021 (extended from July 23)
Paper submission: July 30, 2021
Supplementary: August 8, 2021
Tutorial submission: August 15, 2021
Tutorial notification: August 31, 2021
Rebuttal period: September 16-22, 2021
Paper notification: October 1, 2021
Camera ready: October 15, 2021
Demo submission: November 15, 2021 (extended from July 30)
Demo notification: November 19, 2021 (extended from October 1)
Tutorial: November 30, 2021
Main conference: December 1-3, 2021

  Sponsors