Paper Title
Modeling Indirect Illumination for Inverse Rendering
Paper Authors
Paper Abstract
Recent advances in implicit neural representations and differentiable rendering make it possible to simultaneously recover the geometry and materials of an object from multi-view RGB images captured under unknown static illumination. Despite the promising results achieved, indirect illumination is rarely modeled in previous methods, as it requires expensive recursive path tracing which makes the inverse rendering computationally intractable. In this paper, we propose a novel approach to efficiently recovering spatially-varying indirect illumination. The key insight is that indirect illumination can be conveniently derived from the neural radiance field learned from input images instead of being estimated jointly with direct illumination and materials. By properly modeling the indirect illumination and visibility of direct illumination, interreflection- and shadow-free albedo can be recovered. The experiments on both synthetic and real data demonstrate the superior performance of our approach compared to previous work and its capability to synthesize realistic renderings under novel viewpoints and illumination. Our code and data are available at https://zju3dv.github.io/invrender/.
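The key insight above can be illustrated with a minimal sketch: rather than recursively tracing secondary bounces, each sampled incoming direction at a surface point queries a radiance field once, since the field already encodes the outgoing radiance of the rest of the scene. This is not the paper's implementation (which trains neural networks on real scene geometry); `radiance_field` here is a hypothetical analytic stand-in for a NeRF-style model learned from input images, and the sampling scheme is a plain uniform-hemisphere Monte Carlo estimator.

```python
import numpy as np

# Hypothetical stand-in for a radiance field learned from input images
# (e.g., a NeRF-style MLP). It returns the radiance arriving at point `x`
# from direction `d`; here it is a toy analytic function for illustration.
def radiance_field(x, d):
    return 0.8 * np.maximum(0.0, d @ np.array([0.0, 0.0, 1.0]))

def sample_hemisphere(n, normal, rng):
    """Uniformly sample `n` unit directions on the hemisphere around `normal`."""
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    v[v @ normal < 0.0] *= -1.0  # flip samples that fall below the surface
    return v

def indirect_irradiance(x, normal, n_samples=4096, seed=0):
    """Monte Carlo estimate of indirect irradiance at surface point `x`.

    Each sampled direction makes a single query to the learned radiance
    field instead of spawning a recursive path-traced bounce.
    """
    rng = np.random.default_rng(seed)
    dirs = sample_hemisphere(n_samples, normal, rng)
    li = np.array([radiance_field(x, d) for d in dirs])  # incoming radiance
    cos_theta = dirs @ normal                            # foreshortening term
    # Uniform hemisphere pdf is 1 / (2*pi).
    return 2.0 * np.pi * np.mean(li * cos_theta)

E = indirect_irradiance(np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(E)  # a single non-negative irradiance value
```

For the toy field above, the closed-form answer is `0.8 * 2*pi/3 ≈ 1.68`, so the estimate should land near that value; in the actual method this single-query structure is what keeps inverse rendering tractable, since no recursion through unknown materials is needed.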