Residual Geometric Feature Transform Network for 3D Surface Super-Resolution


Maolin Cui, Wuyuan Xie, Miaohui Wang and Tengcong Huang


In 3D reconstruction, recovering high-resolution surface details from an existing low-resolution 3D surface remains a challenging problem. Owing to the unstructured and irregular nature of 3D data, it is usually difficult to obtain extremely dense 3D surfaces and to capture detailed local features. To tackle this problem, this article introduces an effective deep convolutional network, RGFTNet, that performs 3D surface super-resolution in the 2D normal domain. To restore dense surface details and learn sharp geometric structures simultaneously, a shape-prior acquisition method is designed to obtain a high-quality shape normal from the low-resolution input. The extracted shape normal is then incorporated into the deep convolutional network as a shape prior through the Geometric Feature Transform (GFT) layer. Experimental results show the superiority of the proposed RGFTNet over several recent methods on both computer-generated and real-world data.
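The abstract does not give implementation details of the GFT layer, but feature-transform layers of this kind in super-resolution networks typically modulate intermediate feature maps with a per-pixel scale and shift predicted from the conditioning prior. A minimal NumPy sketch of that pattern, with hypothetical shapes and a simple 1x1 (per-pixel linear) condition network standing in for the paper's actual architecture:

```python
import numpy as np

def gft_layer(features, prior, w_gamma, b_gamma, w_beta, b_beta):
    """Hypothetical sketch of a Geometric-Feature-Transform-style layer.

    Modulates feature maps of shape (C, H, W) with an affine transform
    whose scale (gamma) and shift (beta) are predicted per pixel from a
    shape-prior normal map of shape (P, H, W) via a 1x1 projection.
    All names and shapes here are illustrative, not from the paper.
    """
    P, H, W = prior.shape
    cond = prior.reshape(P, -1)                            # (P, H*W)
    gamma = (w_gamma @ cond + b_gamma).reshape(features.shape)
    beta = (w_beta @ cond + b_beta).reshape(features.shape)
    return features * gamma + beta                         # feature-wise affine

# Toy example: 4 feature channels, a 3-channel normal prior, an 8x8 map.
rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8, 8))
prior = rng.standard_normal((3, 8, 8))
w_g, w_b = rng.standard_normal((4, 3)), rng.standard_normal((4, 3))
b_g, b_b = np.zeros((4, 1)), np.zeros((4, 1))
out = gft_layer(feat, prior, w_g, b_g, w_b, b_b)
print(out.shape)  # (4, 8, 8)
```

The same conditioning signal can be injected at several depths of the network; with residual connections around each modulated block, the layer learns only the correction needed to sharpen geometry, which matches the "residual" framing in the title.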


Important Dates

All deadlines are 23:59 Pacific Time (PT). No extensions will be granted.

Paper registration: July 30, 2021 (extended from July 23)
Paper submission: July 30, 2021
Supplementary material: August 8, 2021
Tutorial submission: August 15, 2021
Tutorial notification: August 31, 2021
Rebuttal period: September 16–22, 2021
Paper notification: October 1, 2021
Camera ready: October 15, 2021
Demo submission: November 15, 2021 (extended from July 30)
Demo notification: November 19, 2021 (extended from October 1)
Tutorials: November 30, 2021
Main conference: December 1–3, 2021