Paper Title
Recovering Fine Details for Neural Implicit Surface Reconstruction
Paper Authors
Paper Abstract
Recent works on implicit neural representations have made significant strides. Learning implicit neural surfaces via volume rendering has gained popularity for multi-view reconstruction without 3D supervision. However, accurately recovering fine details remains challenging due to the underlying ambiguity between geometry and appearance representation. In this paper, we present D-NeuS, a volume-rendering-based neural implicit surface reconstruction method capable of recovering fine geometry details, which extends NeuS with two additional loss functions targeting enhanced reconstruction quality. First, we encourage the surface points rendered by alpha compositing to have zero signed distance values, alleviating the geometry bias arising from transforming the SDF to density for volume rendering. Second, we impose multi-view feature consistency on the surface points, which are derived by interpolating the SDF zero-crossings from the sampled points along each ray. Extensive quantitative and qualitative results demonstrate that our method reconstructs high-accuracy surfaces with fine details and outperforms the state of the art.
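The abstract mentions two surface-point constructions: an alpha-composited surface point whose SDF is driven to zero (the first loss), and a surface point obtained by linearly interpolating the SDF zero-crossing along each ray (where the multi-view feature consistency of the second loss is evaluated). The following is a minimal PyTorch sketch of these two constructions; the function names and tensor shapes are illustrative assumptions, not the paper's actual implementation.

```python
import torch


def surface_points_from_alpha(points, weights):
    """Alpha-composited surface point per ray.

    points:  (R, S, 3) sample positions along each of R rays.
    weights: (R, S) volume-rendering weights (alpha compositing).
    Returns: (R, 3) the weights-weighted mean position per ray.
    """
    return (weights.unsqueeze(-1) * points).sum(dim=1)


def sdf_surface_loss(sdf, surface_pts):
    """First loss (sketch): penalize non-zero SDF at the rendered
    surface points, counteracting the SDF-to-density geometry bias."""
    return sdf(surface_pts).abs().mean()


def zero_crossing_points(points, sdf_vals):
    """Surface points for the second loss (sketch): linearly interpolate
    the first outside-to-inside sign change of the SDF along each ray.

    points:   (R, S, 3) sample positions, ordered near-to-far.
    sdf_vals: (R, S) SDF values at those samples.
    Assumes every ray has at least one positive-to-nonpositive crossing.
    """
    flip = (sdf_vals[:, :-1] > 0) & (sdf_vals[:, 1:] <= 0)  # (R, S-1)
    idx = flip.float().argmax(dim=1)                        # first crossing
    r = torch.arange(points.shape[0])
    s0, s1 = sdf_vals[r, idx], sdf_vals[r, idx + 1]
    t = (s0 / (s0 - s1)).unsqueeze(-1)  # fraction of segment before the zero
    return points[r, idx] * (1 - t) + points[r, idx + 1] * t
```

The multi-view feature consistency loss would then project these zero-crossing points into the source views and compare image features across views; that projection step depends on the camera model and feature extractor and is omitted here.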