Attacking Image Classifiers To Generate 3D Textures
Authors: Camilo Andres Pestana and Ajmal Mian
Abstract: Deep learning has been used successfully for many 2D image synthesis tasks such as inpainting, super-resolution, and image-to-image translation. Generative tasks in 3D space are also becoming popular, such as 3D volume generation from images, image-based styling with subsequent texture mapping onto 3D meshes, and novel view synthesis from single or multiple views. These are made possible mainly by advances in differentiable neural rendering. Some recent works also suggest the feasibility of generative tasks on 2D images through adversarial attacks, a topic that remains largely unexplored in the 3D domain. This paper bridges this gap and shows the potential of adversarial attacks for the task of 3D texture generation. It proposes the first method of its kind to repurpose visual classifiers trained on images for the task of generating realistic textures for 3D meshes, without the need for style images, multi-view images, or retraining. Instead, our scheme uses a targeted adversarial attack to directly minimize the classification loss of an ensemble of models whose gradients are backpropagated through a differentiable renderer to estimate the texture for an input 3D mesh. We show promising results on 3D meshes and also propose a metric to evaluate the texture quality.
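To make the core idea of the abstract concrete, here is a minimal sketch of optimizing a texture via a targeted attack on an ensemble of classifiers. This is not the paper's implementation: the `render` function is a hypothetical stand-in for any differentiable renderer, and the texture resolution, optimizer, and ensemble choice are assumptions for illustration.

```python
# Hedged sketch: a targeted adversarial attack that optimizes a 3D texture.
# `render` is a hypothetical placeholder for a differentiable renderer; its
# signature is an assumption, not the authors' API.
import torch
import torchvision.models as models

def render(mesh, texture, view):
    """Placeholder: differentiably render `mesh` with `texture` from `view`,
    returning an image tensor of shape (1, 3, 224, 224). Assumed, not real."""
    raise NotImplementedError

# Frozen ensemble of pretrained ImageNet classifiers whose gradients
# drive the texture estimate.
ensemble = [models.resnet50(weights="IMAGENET1K_V1").eval(),
            models.vgg16(weights="IMAGENET1K_V1").eval()]
for m in ensemble:
    for p in m.parameters():
        p.requires_grad_(False)

def generate_texture(mesh, views, target_class, steps=500, lr=0.01):
    # The texture is the only optimized variable; a targeted attack pulls the
    # rendered views toward `target_class` under every classifier.
    texture = torch.rand(1, 3, 512, 512, requires_grad=True)
    opt = torch.optim.Adam([texture], lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    target = torch.tensor([target_class])
    for _ in range(steps):
        opt.zero_grad()
        loss = sum(loss_fn(m(render(mesh, texture, v)), target)
                   for m in ensemble for v in views)
        loss.backward()  # gradients flow through the renderer into the texture
        opt.step()
        texture.data.clamp_(0, 1)  # keep texels in a valid color range
    return texture.detach()
```

Summing the classification loss over multiple views and multiple models, as sketched here, encourages a texture that looks consistent from any angle rather than a view-specific adversarial pattern.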