Sur2f: A Hybrid Representation for High-Quality and Efficient Surface Reconstruction from Multi-view Images


Zhangjin Huang¹*, Zhihao Liang¹*, Haojie Zhang¹, Yangkai Lin¹, Kui Jia²

¹South China University of Technology
²School of Data Science, The Chinese University of Hong Kong, Shenzhen

Paper | Code

Abstract




Multi-view surface reconstruction is an ill-posed inverse problem in 3D vision research. It involves modeling the geometry and appearance with appropriate surface representations. Most existing methods rely either on explicit meshes, reconstructed via surface rendering, or on implicit field functions, reconstructed via volume rendering of the fields. The two types of representations in fact have their respective merits. In this work, we propose a new hybrid representation, termed Sur2f, aiming to better benefit from both representations in a complementary manner. Technically, we learn two parallel streams of an implicit signed distance field (SDF) and an explicit surrogate surface (Sur2f) mesh, and unify volume rendering of the implicit SDF and surface rendering of the surrogate mesh with a shared neural shader; the unified shading promotes their convergence to the same underlying surface. We synchronize learning of the surrogate mesh by driving its deformation with functions induced from the implicit SDF. In addition, the synchronized surrogate mesh enables surface-guided volume sampling, which greatly improves the sampling efficiency per ray in volume rendering. We conduct thorough experiments showing that Sur2f outperforms existing reconstruction methods and surface representations, including hybrid ones, in terms of both recovery quality and recovery efficiency.
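To illustrate the surface-guided volume sampling mentioned above: once the surrogate mesh tracks the zero level set of the SDF, ray/mesh intersections give a strong prior on where the surface lies, so per-ray samples can be concentrated in a thin band around the hit depth rather than spread over the whole near/far range. The sketch below is a minimal PyTorch illustration, not the paper's exact sampler; the function name, the band half-width, and the hit_t/hit_mask inputs (assumed to come from an external ray-mesh intersection routine) are all hypothetical.

import torch

def surface_guided_t_samples(hit_t, hit_mask, n_samples=16,
                             band=0.05, t_near=0.0, t_far=4.0):
    # hit_t    : (R,) depth of the first ray/surrogate-mesh intersection
    #            (ignored where hit_mask is False)
    # hit_mask : (R,) bool, True where the ray hits the surrogate mesh
    # returns  : (R, n_samples) sample depths along each ray
    R = hit_t.shape[0]
    device = hit_t.device

    # Stratified unit samples in [0, 1), one stratum per sample slot.
    u = (torch.arange(n_samples, device=device)
         + torch.rand(R, n_samples, device=device)) / n_samples

    # Rays that hit the mesh: sample inside a thin band around the hit depth.
    t_band = (hit_t[:, None] - band) + 2.0 * band * u
    # Rays that miss: fall back to stratified sampling over the full range.
    t_full = t_near + (t_far - t_near) * u

    return torch.where(hit_mask[:, None], t_band, t_full)

The returned depths can be turned into 3D sample points as ray_o[:, None, :] + t[..., None] * ray_d[:, None, :] and fed to the SDF for volume rendering. Rays that miss the mesh fall back to stratified sampling over the full range, so a sampler of this kind degrades gracefully before the surrogate mesh has converged.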


Contributions & Method




Multi-view Surface Reconstruction and Rendering




Transferring Appearance after Multi-view Reconstruction




Physically Based Inverse Rendering




Text-to-3D Generation