Paper Title
Learning Neural Radiance Fields from Multi-View Geometry
Paper Authors
Paper Abstract
We present a framework, called MVG-NeRF, that combines classical Multi-View Geometry algorithms and Neural Radiance Fields (NeRF) for image-based 3D reconstruction. NeRF has revolutionized the field of implicit 3D representations, mainly due to a differentiable volumetric rendering formulation that enables high-quality and geometry-aware novel view synthesis. However, the underlying geometry of the scene is not explicitly constrained during training, thus leading to noisy and incorrect results when extracting a mesh with marching cubes. To this end, we propose to leverage pixelwise depths and normals from a classical 3D reconstruction pipeline as geometric priors to guide NeRF optimization. Such priors are used as pseudo-ground truth during training in order to improve the quality of the estimated underlying surface. Moreover, each pixel is weighted by a confidence value based on the forward-backward reprojection error for additional robustness. Experimental results on real-world data demonstrate the effectiveness of this approach in obtaining clean 3D meshes from images, while maintaining competitive performance in novel view synthesis.
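As a rough illustration of the confidence-weighted geometric supervision described in the abstract, the sketch below shows how pixelwise depth and normal priors from a multi-view stereo pipeline could be turned into an auxiliary loss, with per-pixel confidences derived from the forward-backward reprojection error. This is a minimal sketch in PyTorch under our own assumptions; the function names, the exponential confidence mapping, and the exact loss terms are illustrative and are not taken from the paper's released code.

```python
# Illustrative sketch (not the authors' implementation) of confidence-weighted
# depth/normal supervision for NeRF from MVS-style geometric priors.
import torch


def prior_confidence(reproj_error: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Map a per-pixel forward-backward reprojection error (in pixels) to a
    confidence in [0, 1]; smaller error gives higher confidence (assumed form)."""
    return torch.exp(-reproj_error / tau)


def geometric_prior_loss(depth_pred: torch.Tensor,    # (N,) depth rendered by NeRF along each ray
                         depth_prior: torch.Tensor,   # (N,) pseudo-ground-truth depth from MVS
                         normal_pred: torch.Tensor,   # (N, 3) normals estimated from the NeRF geometry
                         normal_prior: torch.Tensor,  # (N, 3) pseudo-ground-truth normals from MVS
                         confidence: torch.Tensor,    # (N,) per-pixel confidence weights
                         lambda_n: float = 0.1) -> torch.Tensor:
    # Confidence-weighted L1 term on depth.
    l_depth = (confidence * (depth_pred - depth_prior).abs()).mean()
    # Confidence-weighted angular term on normals (1 - cosine similarity).
    cos = torch.nn.functional.cosine_similarity(normal_pred, normal_prior, dim=-1)
    l_normal = (confidence * (1.0 - cos)).mean()
    # The weighting lambda_n is a hypothetical hyperparameter.
    return l_depth + lambda_n * l_normal
```

In this reading, pixels whose priors reproject consistently across views contribute strongly to the surface supervision, while unreliable priors are downweighted rather than discarded.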