Paper Title

Why Are Conditional Generative Models Better Than Unconditional Ones?

Authors

Fan Bao, Chongxuan Li, Jiacheng Sun, Jun Zhu

Abstract

Extensive empirical evidence demonstrates that, by exploiting data labels, conditional generative models are easier to train and perform better than unconditional ones; the same holds for score-based diffusion models. In this paper, we analyze the phenomenon formally and identify that the key to conditional learning is to partition the data properly. Inspired by the analysis, we propose self-conditioned diffusion models (SCDM), which are trained conditioned on cluster indices produced by the k-means algorithm on features extracted by a model pre-trained in a self-supervised manner. SCDM significantly improves on the unconditional model across various datasets and achieves a record-breaking FID of 3.94 on ImageNet 64x64 without labels. Moreover, SCDM achieves a slightly better FID than the corresponding conditional model on CIFAR-10.
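The core preprocessing step the abstract describes (clustering self-supervised features into pseudo-labels that a diffusion model can condition on) can be sketched in a few lines. This is an illustrative NumPy sketch, not the paper's implementation: the feature extractor is stubbed out with synthetic vectors, and the farthest-first centroid initialization is a simplifying choice made here for determinism.

```python
import numpy as np

def kmeans_pseudo_labels(features, k, n_iters=50):
    """Cluster feature vectors with k-means and return cluster indices,
    which serve as pseudo-labels for conditional training.
    (Sketch only; farthest-first init is an assumption of this example.)"""
    # Farthest-first initialization: start from the first point, then
    # repeatedly add the point farthest from all chosen centroids.
    centroids = [features[0]]
    for _ in range(1, k):
        d = np.min(
            [np.linalg.norm(features - c, axis=1) for c in centroids], axis=0
        )
        centroids.append(features[d.argmax()])
    centroids = np.stack(centroids)

    for _ in range(n_iters):
        # Assignment step: each point goes to its nearest centroid.
        dists = np.linalg.norm(
            features[:, None, :] - centroids[None, :, :], axis=-1
        )
        labels = dists.argmin(axis=1)
        # Update step: recompute the mean of each non-empty cluster.
        for j in range(k):
            members = features[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return labels

# Stand-in for self-supervised features: two well-separated blobs.
rng = np.random.default_rng(0)
feats = np.concatenate([rng.normal(0.0, 0.1, (100, 8)),
                        rng.normal(5.0, 0.1, (100, 8))])
pseudo = kmeans_pseudo_labels(feats, k=2)
```

In SCDM these cluster indices play the role that class labels play in an ordinary conditional diffusion model: each training image is paired with its cluster index, and the denoising network receives that index as its conditioning input.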
