Paper Title

TuiGAN: Learning Versatile Image-to-Image Translation with Two Unpaired Images

Authors

Jianxin Lin, Yingxue Pang, Yingce Xia, Zhibo Chen, Jiebo Luo

Abstract

An unsupervised image-to-image translation (UI2I) task deals with learning a mapping between two domains without paired images. While existing UI2I methods usually require numerous unpaired images from different domains for training, there are many scenarios where training data is quite limited. In this paper, we argue that even if each domain contains a single image, UI2I can still be achieved. To this end, we propose TuiGAN, a generative model that is trained on only two unpaired images and amounts to one-shot unsupervised learning. With TuiGAN, an image is translated in a coarse-to-fine manner where the generated image is gradually refined from global structures to local details. We conduct extensive experiments to verify that our versatile method can outperform strong baselines on a wide variety of UI2I tasks. Moreover, TuiGAN is capable of achieving comparable performance with the state-of-the-art UI2I models trained with sufficient data.
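The coarse-to-fine translation described above can be sketched as a multi-scale pyramid: the input is translated at the coarsest scale first, and each finer scale refines the upsampled result from the previous one. This is a minimal illustrative sketch only; the per-scale `generators` interface, the scale factor of 2, and the zero initialization at the coarsest scale are assumptions, since the abstract does not specify the architecture.

```python
import numpy as np

def downscale(img, factor):
    # Naive box downsampling by an integer factor (illustrative, not TuiGAN's actual resizer).
    h, w, c = img.shape
    return img[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

def upscale(img, factor):
    # Nearest-neighbour upsampling.
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

def translate_coarse_to_fine(image, generators, n_scales=3):
    """Translate `image` progressively from the coarsest scale to the finest.

    `generators[n]` is a hypothetical per-scale translator taking
    (input at scale n, upsampled output from the coarser scale).
    """
    output = None
    for n in range(n_scales - 1, -1, -1):  # coarsest (largest n) down to finest (0)
        x_n = downscale(image, 2 ** n)
        if output is None:
            prev = np.zeros_like(x_n)  # assumption: no coarser output at the first scale
        else:
            prev = upscale(output, 2)[:x_n.shape[0], :x_n.shape[1]]
        output = generators[n](x_n, prev)  # refine global structure into local detail
    return output
```

For example, with identity generators that simply return their scale-n input, an 8x8 image passes through scales 4x, 2x, and 1x and comes back at full resolution; in TuiGAN each generator would instead be a GAN trained against the single target-domain image.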
