Attacking Image Classifiers To Generate 3D Textures


Camilo Andres Pestana and Ajmal Mian


Deep learning has been successfully used for many 2D image synthesis tasks such as in-painting, super-resolution, and image-to-image translation. Generative tasks in the 3D space are also becoming popular, such as 3D volume generation from images, image-based styling with subsequent texture mapping on 3D meshes, and novel view synthesis from single or multiple views. These are made possible mainly by advances in differentiable neural rendering. Some recent works also suggest the feasibility of generative tasks on 2D images through adversarial attacks, a topic that remains largely unexplored in the 3D domain. This paper bridges the gap and shows the potential of adversarial attacks for the task of 3D texture generation. It proposes a first-of-its-kind method to re-purpose visual classifiers trained on images for the task of generating realistic textures for 3D meshes without the need for style images, multi-view images, or retraining. Instead, our scheme uses a targeted adversarial attack to directly minimize the classification loss of an ensemble of models whose gradients are backpropagated to estimate the texture for an input 3D mesh. We show promising results on 3D meshes and also propose a metric to evaluate the texture quality.
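The core idea of the abstract, i.e. backpropagating a targeted ensemble classification loss to a texture through a differentiable rendering step, can be sketched in a toy form. The snippet below is a minimal illustration, not the authors' implementation: the "renderer" is assumed to be a fixed linear map and the "ensemble" two random linear classifiers, with all names hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: texture parameters, rendered-image features,
# number of classes, and the target class of the attack.
D_TEX, D_IMG, N_CLASSES, TARGET = 16, 8, 4, 2

# Stand-in for a differentiable renderer (the paper uses neural rendering);
# here it is just a fixed linear projection from texture to image features.
render_W = rng.normal(size=(D_IMG, D_TEX)) / np.sqrt(D_TEX)
# Stand-in for an ensemble of pretrained classifiers.
ensemble = [rng.normal(size=(N_CLASSES, D_IMG)) for _ in range(2)]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def target_prob(texture):
    """Mean probability the ensemble assigns to the target class."""
    img = render_W @ texture
    return float(np.mean([softmax(W @ img)[TARGET] for W in ensemble]))

def grad(texture):
    """Analytic gradient of the targeted cross-entropy loss w.r.t. the texture."""
    img = render_W @ texture
    onehot = np.eye(N_CLASSES)[TARGET]
    g = np.zeros(D_TEX)
    for W in ensemble:
        p = softmax(W @ img)
        # Chain rule through classifier and "renderer": d(-log p_target)/d texture.
        g += render_W.T @ (W.T @ (p - onehot))
    return g / len(ensemble)

texture = rng.normal(size=D_TEX) * 0.01       # initial texture estimate
p0 = target_prob(texture)
for _ in range(200):
    texture -= 0.5 * grad(texture)            # targeted "attack" step on the texture
p1 = target_prob(texture)
print(p0, p1)  # the target-class confidence should rise
```

In the actual method the linear map would be replaced by a differentiable mesh renderer and the classifiers by pretrained image networks, so the same gradient descent on the classification loss shapes the mesh texture.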


Important Dates

All deadlines are 23:59 Pacific Time (PT). No extensions will be granted.

Paper registration July 30, 2021 (extended from July 23)
Paper submission July 30, 2021
Supplementary August 8, 2021
Tutorial submission August 15, 2021
Tutorial notification August 31, 2021
Rebuttal period September 16-22, 2021
Paper notification October 1, 2021
Camera ready October 15, 2021
Demo submission November 15, 2021 (extended from July 30)
Demo notification November 19, 2021 (extended from October 1)
Tutorial November 30, 2021
Main conference December 1-3, 2021