Paper Title
NeRFPlayer: A Streamable Dynamic Scene Representation with Decomposed Neural Radiance Fields
Paper Authors
Paper Abstract
Visually exploring a real-world 4D spatiotemporal space freely in VR has been a long-term quest. The task is especially appealing when only a few or even a single RGB camera is used for capturing the dynamic scene. To this end, we present an efficient framework capable of fast reconstruction, compact modeling, and streamable rendering. First, we propose to decompose the 4D spatiotemporal space according to temporal characteristics. Points in the 4D space are associated with probabilities of belonging to three categories: static, deforming, and new areas. Each area is represented and regularized by a separate neural field. Second, we propose a feature streaming scheme based on hybrid representations for efficiently modeling the neural fields. Our approach, coined NeRFPlayer, is evaluated on dynamic scenes captured by single hand-held cameras and multi-camera arrays, achieving rendering quality and speed comparable or superior to recent state-of-the-art methods, with reconstruction in 10 seconds per frame and interactive rendering.
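To make the decomposition idea concrete, below is a minimal PyTorch sketch (not the authors' released code) of the probabilistic scheme the abstract describes: each 4D point (x, y, z, t) is softly assigned to three neural fields (static, deforming, new) by a predicted probability vector, and the fields' outputs are blended accordingly. The field architectures, hidden sizes, and the decomposition head are all illustrative assumptions; in particular, the actual deforming field in the paper warps points rather than predicting colors directly.

```python
# Hypothetical sketch of decomposing a 4D point into static / deforming / new
# fields weighted by predicted category probabilities. All module sizes are
# assumptions for illustration, not the paper's architecture.
import torch
import torch.nn as nn


class TinyField(nn.Module):
    """Stand-in neural field mapping a point to (RGB, density)."""

    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 3 color channels + 1 density
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class DecomposedField(nn.Module):
    """Blend static / deforming / new fields by per-point probabilities."""

    def __init__(self):
        super().__init__()
        self.static_field = TinyField(in_dim=3)  # time-independent: (x, y, z)
        self.deform_field = TinyField(in_dim=4)  # time-dependent: (x, y, z, t)
        self.new_field = TinyField(in_dim=4)     # time-dependent: (x, y, z, t)
        # Decomposition head: predicts the three category probabilities.
        self.decomp = nn.Sequential(
            nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 3),
        )

    def forward(self, xyzt: torch.Tensor):
        xyz = xyzt[..., :3]
        probs = torch.softmax(self.decomp(xyzt), dim=-1)   # (N, 3)
        outs = torch.stack(
            [self.static_field(xyz),
             self.deform_field(xyzt),
             self.new_field(xyzt)], dim=-2)                # (N, 3, 4)
        blended = (probs.unsqueeze(-1) * outs).sum(dim=-2)  # (N, 4)
        return blended, probs


if __name__ == "__main__":
    model = DecomposedField()
    pts = torch.rand(1024, 4)  # random (x, y, z, t) samples in [0, 1)^4
    rgb_sigma, probs = model(pts)
    print(rgb_sigma.shape, probs.shape)  # (1024, 4) and (1024, 3)
```

The returned probabilities are what would be regularized per category (e.g., encouraging most points to be static), which is how a separate field per temporal behavior can stay compact and streamable.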