Title

One-Shot Domain Adaptation For Face Generation

Authors

Chao Yang, Ser-Nam Lim

Abstract

In this paper, we propose a framework capable of generating face images that fall into the same distribution as that of a given one-shot example. We leverage a pre-trained StyleGAN model that has already learned the generic face distribution. Given the one-shot target, we develop an iterative optimization scheme that rapidly adapts the weights of the model to shift the output's high-level distribution to the target's. To generate images of the same distribution, we introduce a style-mixing technique that transfers the low-level statistics from the target to faces randomly generated with the model. With that, we are able to generate an unlimited number of faces that inherit from the distribution of both generic human faces and the one-shot example. The newly generated faces can serve as augmented training data for other downstream tasks. Such a setting is appealing, as it requires labeling very few examples, or even just one, in the target domain, which is often the case for real-world face manipulations that result from a variety of unknown and unique distributions, each with extremely low prevalence. We show the effectiveness of our one-shot approach for detecting face manipulations and compare it qualitatively and quantitatively with other few-shot domain adaptation methods.
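
As described, the method has two moving parts: a rapid fine-tuning of the generator weights toward the one-shot target, and a style-mixing step that copies the target's fine-layer (low-level) styles onto randomly sampled faces. The sketch below is a minimal illustration of both steps, assuming a generator with the G.mapping / G.synthesis interface of NVIDIA's stylegan2-ada-pytorch release; the latent w_target, the MSE reconstruction loss, and the crossover index are illustrative assumptions, not the paper's exact choices.

```python
import torch
import torch.nn.functional as F

# Assumed interface (stylegan2-ada-pytorch conventions):
#   G.mapping(z, c) -> ws of shape [N, num_ws, w_dim]
#   G.synthesis(ws) -> images, NCHW, values roughly in [-1, 1]
# w_target is a latent of shape [num_ws, w_dim] for the one-shot example
# (e.g., obtained by latent projection); its computation is not shown here.

def adapt_generator(G, w_target, target_img, steps=300, lr=1e-3):
    """Rapid weight adaptation (illustrative): fine-tune the synthesis
    network so the target latent reconstructs the one-shot example.
    Plain MSE is a stand-in; the paper's losses and schedule may differ."""
    opt = torch.optim.Adam(G.synthesis.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        recon = G.synthesis(w_target.unsqueeze(0))  # [1, 3, H, W]
        loss = F.mse_loss(recon, target_img)
        loss.backward()
        opt.step()

def style_mix(G, w_target, n_samples=8, crossover=8, truncation=0.7,
              device="cuda"):
    """Style mixing (illustrative): coarse layers (< crossover) come from
    random latents, fine layers (>= crossover) take the target's styles,
    transferring its low-level statistics onto generic faces."""
    z = torch.randn(n_samples, G.z_dim, device=device)
    ws = G.mapping(z, None, truncation_psi=truncation)  # [N, num_ws, w_dim]
    ws_mixed = ws.clone()
    # Broadcast the target's fine styles across the whole batch.
    ws_mixed[:, crossover:] = w_target.unsqueeze(0)[:, crossover:]
    return G.synthesis(ws_mixed)
```

In this sketch, faces produced by style_mix after adapt_generator would inherit coarse structure from the random latents and fine statistics from the one-shot target; per the abstract, such outputs then serve as augmented training data for a manipulation detector.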
