Parameterization of Ambiguity in Monocular Depth Prediction


Patrik Persson, Linn Öström, Carl Olsson and Kalle Åström


Monocular depth estimation is a highly challenging problem that is often addressed with deep neural networks. While these networks use recognition of high-level image features to predict reasonable-looking depth maps, the results often have poor metric accuracy. Moreover, the standard feed-forward architecture does not allow the prediction to be modified based on cues other than the image. In this paper we relax the monocular depth estimation task by proposing a network that lets us complement image features with a set of auxiliary variables. These variables allow disambiguation when image features alone are not enough to accurately pinpoint the exact depth map, and can be thought of as a low-dimensional parameterization of the surfaces that are reasonable monocular predictions. By searching this parameterization we can combine monocular estimation with traditional photoconsistency- or geometry-based methods to achieve surface estimates that are both visually appealing and metrically accurate. Since we relax the problem, we are able to work with smaller networks than current architectures. In addition, we design a self-supervised training scheme, eliminating the need for ground-truth image/depth-map pairs. Our experimental evaluation shows that our method generates more accurate depth maps and generalizes better than competing state-of-the-art approaches.
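The core idea of searching a low-dimensional parameterization to resolve monocular ambiguity can be illustrated with a toy sketch. This is not the authors' architecture: `decode_depth` is a hypothetical stand-in for a latent-conditioned depth decoder, and a few sparse metric measurements stand in for the photoconsistency or geometry cues; the latent code is found by a simple grid search.

```python
import numpy as np

def decode_depth(features, z):
    """Hypothetical decoder: a base monocular prediction whose global
    scale and offset are controlled by the 2-D latent code z."""
    base = 1.0 + features          # pretend network output (depths in metres)
    scale, offset = z
    return scale * base + offset

def search_latent(features, sparse_idx, sparse_depth, scales, offsets):
    """Grid-search the latent parameterization for the depth map that
    best agrees with the auxiliary metric measurements."""
    best, best_err = None, np.inf
    for s in scales:
        for o in offsets:
            pred = decode_depth(features, (s, o))
            err = np.mean((pred[sparse_idx] - sparse_depth) ** 2)
            if err < best_err:
                best, best_err = (s, o), err
    return best, best_err

rng = np.random.default_rng(0)
features = rng.uniform(0.0, 1.0, size=100)      # stand-in image features
true_depth = 2.0 * (1.0 + features) + 0.5       # ground truth uses z* = (2.0, 0.5)
sparse_idx = np.array([3, 17, 42, 88])          # a few metric cues
z_hat, err = search_latent(features, sparse_idx, true_depth[sparse_idx],
                           scales=np.linspace(0.5, 3.0, 26),
                           offsets=np.linspace(-1.0, 1.0, 21))
```

In this toy setting the search recovers the latent code `z* = (2.0, 0.5)` that makes the prediction metrically consistent with the sparse cues; the paper's method plays the analogous search over its learned parameterization.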


  Important Dates

All deadlines are 23:59 Pacific Time (PT). No extensions will be granted.

Paper registration July 30, 2021 (extended from July 23)
Paper submission July 30, 2021
Supplementary August 8, 2021
Tutorial submission August 15, 2021
Tutorial notification August 31, 2021
Rebuttal period September 16-22, 2021
Paper notification October 1, 2021
Camera ready October 15, 2021
Demo submission November 15, 2021 (extended from July 30)
Demo notification November 19, 2021 (extended from October 1)
Tutorial November 30, 2021
Main conference December 1-3, 2021