Paper Title
Neural Assets: Volumetric Object Capture and Rendering for Interactive Environments
Paper Authors
Paper Abstract
Creating realistic virtual assets is a time-consuming process: it usually involves an artist designing the object and then spending considerable effort tweaking its appearance. Intricate details and certain effects, such as subsurface scattering, elude representation with real-time BRDFs, making it impossible to fully capture the appearance of some objects. Inspired by recent progress in neural rendering, we propose an approach for capturing real-world objects in everyday environments faithfully and quickly. We use a novel neural representation to reconstruct volumetric effects, such as translucent object parts, and to preserve photorealistic object appearance. To support real-time rendering without compromising quality, our model uses a grid of features and a small MLP decoder that is transpiled into efficient shader code running at interactive frame rates. This enables seamless integration of the proposed neural assets with existing mesh environments and objects. Thanks to the use of standard shader code, rendering is portable across many existing hardware and software systems.
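The core representation described above (a feature grid queried at a 3D point, decoded by a small MLP into shading quantities) can be illustrated with a minimal sketch. This is not the paper's implementation: the grid resolution, feature width, layer sizes, and output parameterization below are all assumptions for illustration, and the NumPy MLP stands in for the decoder that the paper transpiles to shader code.

```python
import numpy as np

# Hypothetical sizes -- the abstract does not specify the actual
# grid resolution, feature width, or decoder architecture.
GRID_RES, FEAT_DIM, HIDDEN = 32, 8, 16

rng = np.random.default_rng(0)
# Learnable feature grid (random here; trained from captures in practice).
feature_grid = rng.standard_normal((GRID_RES, GRID_RES, GRID_RES, FEAT_DIM))
# Tiny two-layer MLP decoder mapping features -> (RGB, density).
W1 = rng.standard_normal((FEAT_DIM, HIDDEN)) * 0.1
W2 = rng.standard_normal((HIDDEN, 4)) * 0.1

def sample_grid(p):
    """Trilinearly interpolate the feature grid at a point p in [0, 1)^3."""
    x = p * (GRID_RES - 1)
    i0 = np.floor(x).astype(int)
    i1 = np.minimum(i0 + 1, GRID_RES - 1)
    t = x - i0
    feat = np.zeros(FEAT_DIM)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                idx = (i1[0] if dx else i0[0],
                       i1[1] if dy else i0[1],
                       i1[2] if dz else i0[2])
                w = ((t[0] if dx else 1 - t[0]) *
                     (t[1] if dy else 1 - t[1]) *
                     (t[2] if dz else 1 - t[2]))
                feat += w * feature_grid[idx]
    return feat

def decode(features):
    """Small MLP decoder; a network this size is what could be
    transpiled into a few lines of shader code."""
    h = np.maximum(features @ W1, 0.0)          # ReLU hidden layer
    out = h @ W2
    rgb = 1.0 / (1.0 + np.exp(-out[:3]))        # sigmoid -> colour in [0, 1]
    density = np.maximum(out[3], 0.0)           # non-negative volume density
    return rgb, density

rgb, density = decode(sample_grid(np.array([0.4, 0.5, 0.6])))
```

Because the decoder is just two small matrix multiplies and elementwise nonlinearities, each evaluation is cheap enough to run per-sample inside a fragment shader, which is what makes interactive frame rates and integration alongside ordinary mesh rendering plausible.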