Multi-NeuS: 3D Head Portraits from Single Image with Neural Implicit Functions

Egor Burkov¹, Ruslan Rakhimov¹, Aleksandr Safin¹, Evgeny Burnaev¹˒², Victor Lempitsky³
¹Skoltech  ²AIRI  ³Cinemersive Labs

Abstract

We present an approach for reconstructing textured 3D meshes of human heads from one or a few views. Since such few-shot reconstruction is underconstrained, it requires prior knowledge, which is hard to impose on traditional 3D reconstruction algorithms. In this work, we rely on neural implicit functions, a recently introduced 3D representation that, being based on neural networks, naturally lends itself to learning priors about human heads from data and is directly convertible to a textured mesh. Specifically, we extend NeuS, a state-of-the-art neural implicit function formulation, to represent multiple objects of a class (human heads in our case) simultaneously. The underlying neural network architecture is designed to learn the commonalities among these objects and to generalize to unseen ones. Our model is trained on just a hundred smartphone videos and does not require any scanned 3D data. Once trained, the model can fit novel heads in a few-shot or even one-shot setting with good results.
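To make the idea of a single implicit function representing multiple objects of a class concrete, the sketch below shows one common way such sharing is implemented: a shared SDF network conditioned on a per-identity latent code (auto-decoder style). This is an illustrative assumption, not the authors' architecture; the latent dimension, layer sizes, and the embedding-table conditioning are all hypothetical choices.

```python
# Minimal sketch (not the paper's code): a shared SDF MLP conditioned on a
# per-object latent code, so that one network represents many heads at once.
import torch
import torch.nn as nn


class MultiObjectSDF(nn.Module):
    """Shared signed-distance network conditioned on a per-identity latent code."""

    def __init__(self, num_objects: int, latent_dim: int = 256, hidden: int = 256):
        super().__init__()
        # One learnable latent code per training identity (auto-decoder style).
        self.codes = nn.Embedding(num_objects, latent_dim)
        self.mlp = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1),  # signed distance to the head surface
        )

    def forward(self, xyz: torch.Tensor, object_ids: torch.Tensor) -> torch.Tensor:
        # xyz: (B, 3) query points; object_ids: (B,) identity indices.
        z = self.codes(object_ids)                  # (B, latent_dim)
        return self.mlp(torch.cat([xyz, z], dim=-1))  # (B, 1) SDF values


# Fitting an unseen head would then amount to optimizing a new latent code
# (and possibly fine-tuning the shared weights) against one or a few images.
sdf = MultiObjectSDF(num_objects=100)
points = torch.rand(1024, 3) * 2 - 1              # random query points in [-1, 1]^3
ids = torch.zeros(1024, dtype=torch.long)         # all points query identity 0
distances = sdf(points, ids)
```

In this kind of setup, the shared weights capture what is common to all heads while the latent codes capture identity-specific geometry, which is one plausible reading of how the model generalizes to unseen heads from few observations.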

Additional Results