Paper Title

StyleUV: Diverse and High-fidelity UV Map Generative Model

Paper Authors

Myunggi Lee, Wonwoong Cho, Moonheum Kim, David Inouye, Nojun Kwak

Abstract

Reconstructing 3D human faces in the wild with the 3D Morphable Model (3DMM) has become popular in recent years. While most prior work focuses on estimating more robust and accurate geometry, relatively little attention has been paid to improving the quality of the texture model. Meanwhile, with the advent of Generative Adversarial Networks (GANs), there has been great progress in reconstructing realistic 2D images. Recent work demonstrates that GANs trained on abundant high-quality UV maps can produce high-fidelity textures superior to those produced by existing methods. However, such high-quality UV maps are difficult to obtain: they are expensive to acquire and require laborious refinement. In this work, we present a novel UV map generative model that learns to generate diverse and realistic synthetic UV maps without requiring high-quality UV maps for training. Our proposed framework can be trained solely with in-the-wild images (i.e., UV maps are not required) by leveraging a combination of GANs and a differentiable renderer. Both quantitative and qualitative evaluations demonstrate that our proposed texture model produces more diverse and higher-fidelity textures compared to existing methods.
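
Below is a minimal PyTorch sketch of the training scheme the abstract describes: a GAN generator produces UV maps, an assumed differentiable renderer textures a 3DMM face mesh with each map and rasterizes it to an image, and a discriminator compares the rendered faces against in-the-wild photographs, so no ground-truth UV maps are needed. The module names (UVGenerator, FaceDiscriminator), the placeholder renderer interface, and all shapes and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: the `renderer` is assumed to be any differentiable function that
# textures a fixed 3DMM mesh with a UV map and rasterizes it to an image.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UVGenerator(nn.Module):
    """Maps a latent code to a UV texture map (placeholder architecture)."""
    def __init__(self, latent_dim=512, base_ch=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, base_ch * 4 * 4)
        blocks = []
        for _ in range(6):  # 4x4 -> 256x256
            blocks += [nn.Upsample(scale_factor=2, mode="nearest"),
                       nn.Conv2d(base_ch, base_ch, 3, padding=1),
                       nn.LeakyReLU(0.2)]
        self.up = nn.Sequential(*blocks, nn.Conv2d(base_ch, 3, 3, padding=1), nn.Tanh())

    def forward(self, z):
        x = self.fc(z).view(-1, 256, 4, 4)
        return self.up(x)  # (B, 3, 256, 256) UV map in [-1, 1]

class FaceDiscriminator(nn.Module):
    """Real-vs-fake classifier on rendered / in-the-wild face images."""
    def __init__(self, img_size=256):
        super().__init__()
        layers, ch = [], 3
        for out_ch in (64, 128, 256, 512):  # each conv halves the resolution
            layers += [nn.Conv2d(ch, out_ch, 4, stride=2, padding=1),
                       nn.LeakyReLU(0.2)]
            ch = out_ch
        self.features = nn.Sequential(*layers)
        self.head = nn.Linear(512 * (img_size // 16) ** 2, 1)

    def forward(self, img):
        return self.head(self.features(img).flatten(1))

def train_step(gen, disc, renderer, real_faces, g_opt, d_opt, latent_dim=512):
    """One adversarial step on a batch of in-the-wild face photos."""
    z = torch.randn(real_faces.size(0), latent_dim, device=real_faces.device)
    uv_maps = gen(z)
    fake_faces = renderer(uv_maps)  # differentiable rendering of textured 3DMM

    # Discriminator update (non-saturating GAN loss).
    d_opt.zero_grad()
    d_loss = (F.softplus(disc(fake_faces.detach())).mean()
              + F.softplus(-disc(real_faces)).mean())
    d_loss.backward()
    d_opt.step()

    # Generator update: gradients flow through the renderer into the UV maps.
    g_opt.zero_grad()
    g_loss = F.softplus(-disc(fake_faces)).mean()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```

Because the renderer is differentiable, the adversarial signal computed on rendered face images propagates back into the UV generator, which is what lets the model learn a UV map distribution from photographs alone.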
