Paper Title

ResNeRF: Geometry-Guided Residual Neural Radiance Field for Indoor Scene Novel View Synthesis

Authors

Yuting Xiao, Yiqun Zhao, Yanyu Xu, Shenghua Gao

Abstract

We present ResNeRF, a novel geometry-guided two-stage framework for indoor scene novel view synthesis. Noting that a good geometry greatly boosts the performance of novel view synthesis, and to avoid the geometry ambiguity issue, we propose to characterize the density distribution of the scene as a base density estimated from the scene geometry plus a residual density parameterized by that geometry. In the first stage, we focus on geometry reconstruction based on an SDF representation, which yields a good geometric surface of the scene as well as a sharp density. In the second stage, the residual density is learned, conditioned on the SDF from the first stage, to encode more appearance detail. In this way, our method better learns the density distribution under the geometry prior, enabling high-fidelity novel view synthesis while preserving 3D structure. Experiments on large-scale indoor scenes with many less-observed and textureless areas show that, with a good 3D surface, our method achieves state-of-the-art performance in novel view synthesis.
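The abstract only states the density decomposition (a geometry-derived base density plus a residual density conditioned on the stage-1 SDF) without giving its exact form. Below is a minimal sketch of how such a decomposition could be wired up; the VolSDF-style Laplace-CDF transform used for the base density, the `ResidualDensityField` class, and the small residual MLP are all illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of a "base + residual" density field. All design choices here
# (Laplace-CDF transform, MLP width, softplus on the residual) are assumptions
# for illustration; they are not taken from the ResNeRF paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualDensityField(nn.Module):
    """Density = base density derived from a stage-1 SDF + a learned residual."""

    def __init__(self, sdf_fn, beta: float = 0.1, hidden: int = 64):
        super().__init__()
        self.sdf_fn = sdf_fn      # stage-1 SDF network, assumed frozen in stage 2
        self.beta = beta          # controls how sharply density peaks at the surface
        # Hypothetical residual MLP conditioned on position and SDF value.
        self.residual_mlp = nn.Sequential(
            nn.Linear(3 + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def base_density(self, sdf: torch.Tensor) -> torch.Tensor:
        # Laplace-CDF transform of the signed distance (VolSDF-style):
        # high density inside the surface, falling off sharply outside.
        s = -sdf
        e = 0.5 * torch.exp(-s.abs() / self.beta)
        cdf = torch.where(s <= 0, e, 1.0 - e)
        return cdf / self.beta

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        sdf = self.sdf_fn(x)                              # (N, 1) signed distances
        sigma_base = self.base_density(sdf)               # geometry-driven density
        res = self.residual_mlp(torch.cat([x, sdf], dim=-1))
        return sigma_base + F.softplus(res)               # keep total density >= 0

# Toy usage with a unit-sphere SDF standing in for the stage-1 geometry network.
sphere_sdf = lambda p: p.norm(dim=-1, keepdim=True) - 1.0
field = ResidualDensityField(sphere_sdf)
sigma = field(torch.randn(1024, 3))                       # (1024, 1) densities
```

In the paper's two-stage setup the SDF would come from the stage-1 reconstruction rather than an analytic sphere; the sketch only shows how a residual density can be layered on top of a geometry-derived base density.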
