LatentHuman: Shape-and-Pose Disentangled Latent Representations for Human Bodies
Authors: Sandro Lombardi, Bangbang Yang, Tianxing Fan, Hujun Bao, Guofeng Zhang, Marc Pollefeys, and Zhaopeng Cui
Abstract: 3D representation and reconstruction of human bodies have long been studied in computer vision. Traditional methods rely mostly on parametric statistical linear models, limiting the space of possible bodies to linear combinations. Only recently have some approaches leveraged neural implicit representations for human body modeling, but these approaches are either limited in representation capability or are not physically meaningful and controllable. In this work, we propose a novel neural implicit representation for human bodies that is fully differentiable and optimizable, with disentangled shape and pose latent spaces. Contrary to prior work, our representation accepts parametric pose inputs, which makes it controllable, e.g., for tasks like animation, while simultaneously allowing optimization with respect to pose and shape, e.g., for tasks like 3D fitting and pose tracking. Moreover, our model can be trained and fine-tuned directly on non-watertight data with novel loss functions. Experiments demonstrate improved 3D reconstruction performance over state-of-the-art approaches and show the applicability of our method to shape interpolation, model fitting, and pose tracking.
PDF (protected)
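To make the idea of a disentangled, optimizable implicit representation concrete, the following is a minimal PyTorch sketch under assumed names and dimensions (the DisentangledImplicitBody class, the latent sizes, and the simple surface loss are illustrative placeholders, not the paper's actual architecture or loss functions): a decoder conditioned on separate shape and pose codes predicts a signed distance at query points, and because the whole pipeline is differentiable, the codes can be optimized to fit observed surface samples, as in model fitting or pose tracking.

```python
import torch
import torch.nn as nn

class DisentangledImplicitBody(nn.Module):
    """Hypothetical sketch: an implicit body decoder conditioned on
    separate shape and pose latent codes (not the authors' architecture)."""

    def __init__(self, shape_dim=128, pose_dim=72, hidden=256):
        super().__init__()
        # MLP mapping a 3D query point plus shape/pose codes to a signed distance.
        self.net = nn.Sequential(
            nn.Linear(3 + shape_dim + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points, shape_code, pose_params):
        # points: (N, 3) query locations; shape_code: (shape_dim,); pose_params: (pose_dim,)
        cond = torch.cat([shape_code, pose_params]).expand(points.shape[0], -1)
        return self.net(torch.cat([points, cond], dim=-1))  # (N, 1) signed distances


# Because the representation is differentiable, shape and pose codes can be
# optimized to fit observed surface points. In practice the decoder would be
# pretrained and frozen; here its weights are random purely for illustration.
model = DisentangledImplicitBody()
shape_code = torch.zeros(128, requires_grad=True)
pose_params = torch.zeros(72, requires_grad=True)
optimizer = torch.optim.Adam([shape_code, pose_params], lr=1e-2)

scan_points = torch.randn(1024, 3)  # stand-in for observed surface samples
for _ in range(100):
    optimizer.zero_grad()
    sdf = model(scan_points, shape_code, pose_params)
    loss = sdf.abs().mean()  # observed surface points should have zero signed distance
    loss.backward()
    optimizer.step()
```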