Paper Title

TailorGAN: Making User-Defined Fashion Designs

Paper Authors

Lele Chen, Justin Tian, Guo Li, Cheng-Haw Wu, Erh-Kan King, Kuan-Ting Chen, Shao-Hang Hsieh, Chenliang Xu

Paper Abstract

Attribute editing has become an important and emerging topic in computer vision. In this paper, we consider the following task: given a reference garment image A and another image B with a target attribute (collar/sleeve), generate a photo-realistic image that combines the texture from reference A and the new attribute from reference B. The highly entangled attributes and the lack of paired data are the main challenges of the task. To overcome these limitations, we propose a novel self-supervised model that synthesizes garment images with disentangled attributes (e.g., collar and sleeves) without paired data. Our method consists of a reconstruction learning step and an adversarial learning step. The model learns texture and location information through reconstruction learning; its capability is then generalized to single-attribute manipulation through adversarial learning. Meanwhile, we compose a new dataset, named GarmentSet, with annotated collar and sleeve landmarks on clean garment images. Extensive experiments on this dataset and on real-world samples demonstrate that our method synthesizes much better results than the state-of-the-art methods in both quantitative and qualitative comparisons.
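The two-step objective described in the abstract can be sketched numerically. The following is a minimal toy illustration, not the paper's architecture: images are stand-in flat vectors, the texture/attribute encoders and the decoder are random linear maps, and the discriminator is a single linear scorer (all hypothetical names). It only shows how the reconstruction step needs no pairs (texture and attribute come from the same image) while the attribute-swap step is scored adversarially because no ground truth exists.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (hypothetical): "images" are flat vectors; encoders and
# decoder are random linear maps rather than the paper's networks.
D_IMG, D_FEAT = 64, 16
W_attr = rng.normal(scale=0.1, size=(D_FEAT, D_IMG))      # attribute (collar/sleeve) encoder
W_tex = rng.normal(scale=0.1, size=(D_FEAT, D_IMG))       # texture/location encoder
W_dec = rng.normal(scale=0.1, size=(D_IMG, 2 * D_FEAT))   # decoder

def synthesize(img_texture, img_attribute):
    """Combine the texture code of one garment with the attribute code of another."""
    z = np.concatenate([W_tex @ img_texture, W_attr @ img_attribute])
    return W_dec @ z

# Step 1: reconstruction learning -- texture and attribute are taken from
# the SAME image, so the target is the input itself (self-supervised).
x = rng.normal(size=D_IMG)
recon = synthesize(x, x)
loss_recon = np.abs(recon - x).mean()          # L1 reconstruction loss

# Step 2: adversarial learning -- the attribute comes from a DIFFERENT
# image; realism is scored by a (toy) discriminator instead of a target.
x_ref, x_attr = rng.normal(size=D_IMG), rng.normal(size=D_IMG)
fake = synthesize(x_ref, x_attr)
w_disc = rng.normal(scale=0.1, size=D_IMG)     # toy linear discriminator
loss_adv = np.log1p(np.exp(-(w_disc @ fake)))  # non-saturating generator loss
```

In the paper both losses train real encoder/decoder networks; here the point is only that `loss_recon` is defined without any paired data, while `loss_adv` supplies the training signal for the unpaired attribute swap.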
