Paper Title
Point-NeRF: Point-based Neural Radiance Fields
Paper Authors
Paper Abstract
Volumetric neural rendering methods like NeRF generate high-quality view synthesis results but are optimized per-scene, leading to prohibitive reconstruction times. On the other hand, deep multi-view stereo methods can quickly reconstruct scene geometry via direct network inference. Point-NeRF combines the advantages of these two approaches by using neural 3D point clouds, with associated neural features, to model a radiance field. Point-NeRF can be rendered efficiently by aggregating neural point features near scene surfaces in a ray-marching-based rendering pipeline. Moreover, Point-NeRF can be initialized via direct inference of a pre-trained deep network to produce a neural point cloud; this point cloud can be fine-tuned to surpass the visual quality of NeRF with 30X faster training time. Point-NeRF can be combined with other 3D reconstruction methods and handles the errors and outliers in such methods via a novel pruning and growing mechanism. Experiments on the DTU, NeRF Synthetic, ScanNet, and Tanks and Temples datasets demonstrate that Point-NeRF can surpass existing methods and achieve state-of-the-art results.
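The abstract gives no code, so the following is a minimal PyTorch sketch of the idea it describes: features of neural points near each shading point along a ray are aggregated into density and view-dependent radiance, which are then composited by standard volume rendering. The class name `PointAggregator`, the network sizes, the `radius` cutoff, and the inverse-distance weighting are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of Point-NeRF-style rendering (assumed names and weighting, not the authors' code).
import torch
import torch.nn as nn

class PointAggregator(nn.Module):
    def __init__(self, feat_dim=32, hidden=64):
        super().__init__()
        # Map an aggregated neural-point feature to density and view-dependent color.
        self.density_head = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.color_head = nn.Sequential(nn.Linear(feat_dim + 3, hidden), nn.ReLU(), nn.Linear(hidden, 3))

    def forward(self, shading_pts, view_dirs, point_xyz, point_feat, radius=0.1):
        # shading_pts: (S, 3) ray samples; view_dirs: (S, 3); point_xyz: (P, 3); point_feat: (P, F).
        d = torch.cdist(shading_pts, point_xyz)                 # (S, P) sample-to-point distances
        mask = (d < radius).float()                             # keep only points near the shading point
        w = mask / (d + 1e-6)                                   # assumed inverse-distance weighting
        w = w / (w.sum(-1, keepdim=True) + 1e-8)
        agg = w @ point_feat                                    # (S, F) aggregated neural features
        sigma = torch.relu(self.density_head(agg)).squeeze(-1)  # per-sample density
        rgb = torch.sigmoid(self.color_head(torch.cat([agg, view_dirs], -1)))
        return sigma, rgb

def volume_render(sigma, rgb, deltas):
    # Standard ray-marching compositing for one ray: alpha from density, accumulate front to back.
    alpha = 1.0 - torch.exp(-sigma * deltas)
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:1]), 1.0 - alpha + 1e-10])[:-1], 0)
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(0)

# Example usage for one ray with 64 samples and a 1k-point neural point cloud (random data).
model = PointAggregator()
pts, dirs = torch.rand(64, 3), torch.randn(64, 3)
cloud_xyz, cloud_feat = torch.rand(1000, 3), torch.randn(1000, 32)
sigma, rgb = model(pts, dirs / dirs.norm(dim=-1, keepdim=True), cloud_xyz, cloud_feat)
color = volume_render(sigma, rgb, torch.full((64,), 1.0 / 64))
```

The sketch only covers the rendering path; the initialization from a pre-trained network and the pruning-and-growing of neural points mentioned in the abstract are not modeled here.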