Paper Title
NeRV: Neural Reflectance and Visibility Fields for Relighting and View Synthesis
Paper Authors
Paper Abstract
We present a method that takes as input a set of images of a scene illuminated by unconstrained known lighting, and produces as output a 3D representation that can be rendered from novel viewpoints under arbitrary lighting conditions. Our method represents the scene as a continuous volumetric function parameterized as MLPs whose inputs are a 3D location and whose outputs are the following scene properties at that input location: volume density, surface normal, material parameters, distance to the first surface intersection in any direction, and visibility of the external environment in any direction. Together, these allow us to render novel views of the object under arbitrary lighting, including indirect illumination effects. The predicted visibility and surface intersection fields are critical to our model's ability to simulate direct and indirect illumination during training, because the brute-force techniques used by prior work are intractable for lighting conditions outside of controlled setups with a single light. Our method outperforms alternative approaches for recovering relightable 3D scene representations, and performs well in complex lighting settings that have posed a significant challenge to prior work.
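To make the abstract's description concrete, the sketch below shows the kind of volumetric fields it names: one MLP mapping a 3D location to volume density and a surface normal, and another mapping a location and direction to visibility and the distance to the first surface intersection. This is a minimal illustration under assumed layer sizes and activations, not the authors' architecture; the network names (`shape_net`, `vis_net`) and the use of plain NumPy with random weights are my own for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_params(sizes, rng):
    # Random weights for a small fully connected network (illustrative only;
    # a real model would learn these from the input images).
    return [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    for W, b in params[:-1]:
        x = np.maximum(x @ W + b, 0.0)  # ReLU hidden layers
    W, b = params[-1]
    return x @ W + b                    # linear output layer

# Hypothetical "shape" field: 3D location -> volume density + surface normal.
shape_net = mlp_params([3, 64, 64, 4], rng)

# Hypothetical "visibility" field: (location, direction) -> visibility of the
# external environment + distance to the first surface intersection.
vis_net = mlp_params([6, 64, 64, 2], rng)

x = np.array([0.1, -0.2, 0.3])  # query point
d = np.array([0.0, 0.0, 1.0])   # query direction

out = mlp(shape_net, x)
sigma = np.log1p(np.exp(out[0]))              # softplus: non-negative density
normal = out[1:4] / np.linalg.norm(out[1:4])  # unit surface normal

vis_out = mlp(vis_net, np.concatenate([x, d]))
visibility = 1.0 / (1.0 + np.exp(-vis_out[0]))  # sigmoid: value in [0, 1]
distance = np.log1p(np.exp(vis_out[1]))         # non-negative distance
```

Querying learned fields like these, instead of ray-marching through the density to compute shadowing, is what the abstract refers to when it contrasts the predicted visibility and surface intersection fields with brute-force lighting simulation.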