Title
On Demand Solid Texture Synthesis Using Deep 3D Networks
Authors
Abstract
This paper describes a novel approach for on-demand volumetric texture synthesis, based on a deep learning framework that allows the generation of high-quality 3D data at interactive rates. Given a few example images of a texture, a generative network is trained to synthesize coherent portions of solid textures of arbitrary size that reproduce the visual characteristics of the exemplars along some directions. To cope with the memory limitations and computational complexity inherent to both high-resolution and 3D processing on the GPU, only 2D textures referred to as "slices" are generated during the training stage. These synthetic textures are compared to the exemplar images via a perceptual loss function based on a pre-trained deep network. The proposed network is very light (fewer than 100k parameters); it therefore requires only a moderate training time (a few hours) and is capable of very fast generation (around one second for $256^3$ voxels) on a single GPU. Integrated with a spatially seeded PRNG, the proposed generator network directly returns an RGB value given a set of 3D coordinates. The synthesized volumes achieve visual quality at least equivalent to that of state-of-the-art patch-based approaches. They are naturally seamlessly tileable and can be fully generated in parallel.
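To make the "spatially seeded PRNG" idea concrete, below is a minimal PyTorch sketch (not the authors' code) of on-demand evaluation: the noise at each voxel is a deterministic function of its integer coordinates, so any sub-volume of the conceptually infinite texture can be generated independently yet consistently. The hash constants, the toy generator architecture, and all names here are hypothetical illustrations, not the network described in the paper.

```python
import torch
import torch.nn as nn

def seeded_noise(x0, y0, z0, size, channels=8):
    """Deterministic per-voxel noise from integer coordinates (hypothetical spatial hash)."""
    xs = torch.arange(x0, x0 + size)
    ys = torch.arange(y0, y0 + size)
    zs = torch.arange(z0, z0 + size)
    gx, gy, gz = torch.meshgrid(xs, ys, zs, indexing="ij")
    coords = (gx * 73856093) ^ (gy * 19349663) ^ (gz * 83492791)  # hash of (x, y, z)
    noise = []
    for c in range(channels):
        # Mix in a per-channel constant, then map to a pseudo-uniform float in [0, 1].
        h = (coords ^ (c * 2654435761)) & 0x7FFFFFFF
        noise.append(h.to(torch.float32) / float(0x7FFFFFFF))
    return torch.stack(noise, dim=0).unsqueeze(0)  # shape (1, C, D, H, W)

class TinyGenerator(nn.Module):
    """Toy fully convolutional 3D generator, well under 100k parameters."""
    def __init__(self, in_ch=8, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv3d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv3d(hidden, 3, 1), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, z):
        return self.net(z)

# Any block of the volume can be synthesized on demand from its coordinates.
# (A real implementation would pad the noise block by the network's receptive
# field so that adjacent blocks match seamlessly at their borders.)
g = TinyGenerator()
block = g(seeded_noise(128, 0, 64, size=32))  # (1, 3, 32, 32, 32) RGB block
```

Because the noise is coordinate-hashed rather than drawn sequentially, blocks can be evaluated in any order and in parallel, which is what enables the tileable, fully parallel generation claimed in the abstract.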