3D reconstruction of insects: an improved multifocus stacking and an evaluation of learning-based MVS approaches

Authors: Chang Xu, Jiayuan Liu, Chuong Nguyen, Fabien Castan, Benoit Maujean and Simone Gasparini
Abstract: 3D reconstruction using Structure-from-Motion and Multi-View Stereo has become a mature technology used in production across a wide range of fields. However, many problems remain for challenging cases such as flat-colored surfaces, blur (due to motion or defocus), non-Lambertian materials, or problematic geometric structures such as thin structures. Most of these problems are combined in the context of insect 3D scanning. In this paper, we evaluate and compare the classical reconstruction pipeline with new deep-learning-based reconstruction methods for small and complex objects. We selected three methods for evaluation: classical Multi-View Stereo (MVS), Neural Radiance Fields (NeRF), and Neural Sparse Voxel Fields (NSVF). We found that multifocus stacking with image alignment based on the lens equation provides the most accurate 3D reconstructed model. Using a 3D calibration target for image alignment also significantly reduces the reconstruction error. Furthermore, a uniform pose distribution when capturing images leads to a more accurate 3D reconstructed model. The accuracy of the poses estimated by Structure-from-Motion, and of the resulting 3D reconstructed model, depends strongly on image resolution and on the shape of the specimen. Lower image resolution significantly reduces the accuracy of MVS, but affects NeRF and NSVF much less. Flat geometry causes significantly larger pose estimation errors than more "round" shapes. Finally, MVS fails on objects with reflections and transparency, while NeRF and NSVF successfully reconstruct such objects.
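The lens-equation-based alignment mentioned in the abstract can be illustrated with a minimal sketch: under a thin-lens model, slices of a focus stack taken at different focus distances have slightly different magnifications, which can be predicted and compensated before stacking. The Python example below is an assumption-laden illustration of that idea, not the paper's implementation; the focal length, focus distances, and function names are hypothetical.

```python
# Minimal sketch: rescaling focus-stack slices with the thin-lens equation
# before stacking. Assumes a thin-lens model with fixed focal length f and a
# known focus (object) distance per slice; all values below are hypothetical.
import numpy as np
import cv2


def thin_lens_magnification(f_mm: float, object_dist_mm: float) -> float:
    """Magnification m = f / (d_o - f), derived from 1/f = 1/d_o + 1/d_i."""
    return f_mm / (object_dist_mm - f_mm)


def rescale_to_reference(img: np.ndarray, f_mm: float,
                         d_obj_mm: float, d_ref_mm: float) -> np.ndarray:
    """Warp one slice so its magnification matches the reference slice."""
    scale = (thin_lens_magnification(f_mm, d_ref_mm)
             / thin_lens_magnification(f_mm, d_obj_mm))
    h, w = img.shape[:2]
    # Scale about the image centre (principal point assumed at the centre).
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle=0.0, scale=scale)
    return cv2.warpAffine(img, M, (w, h), flags=cv2.INTER_LINEAR)


if __name__ == "__main__":
    f_mm = 100.0                                  # hypothetical macro lens
    focus_distances = np.linspace(190.0, 210.0, 5)  # hypothetical stack
    d_ref = focus_distances[len(focus_distances) // 2]
    stack = [np.random.rand(480, 640).astype(np.float32)
             for _ in focus_distances]            # placeholder images
    aligned = [rescale_to_reference(img, f_mm, d, d_ref)
               for img, d in zip(stack, focus_distances)]
    # A real pipeline would then fuse the aligned slices, e.g. by picking
    # the locally sharpest slice per pixel (maximum Laplacian response).
```

In practice the aligned slices would be fused into a single all-in-focus image per view, which then feeds the SfM/MVS (or NeRF/NSVF) reconstruction.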