Paper Title
Semantic Image Synthesis via Diffusion Models
Paper Authors
Abstract
Denoising Diffusion Probabilistic Models (DDPMs) have achieved remarkable success in various image generation tasks compared with Generative Adversarial Nets (GANs). Recent work on semantic image synthesis has mainly followed GAN-based approaches, which may lead to unsatisfactory quality or diversity in the generated images. In this paper, we propose a novel DDPM-based framework for semantic image synthesis. Unlike previous conditional diffusion models, which directly feed the semantic layout and the noisy image together as input to a U-Net and may therefore not fully leverage the information in the input semantic mask, our framework processes the semantic layout and the noisy image differently. It feeds the noisy image into the encoder of the U-Net while injecting the semantic layout into the decoder through multi-layer spatially-adaptive normalization operators. To further improve generation quality and semantic interpretability in semantic image synthesis, we introduce the classifier-free guidance sampling strategy, which incorporates the score of an unconditional model into the sampling process. Extensive experiments on four benchmark datasets demonstrate the effectiveness of our proposed method, achieving state-of-the-art performance in terms of fidelity (FID) and diversity (LPIPS). Our code and pretrained models are available at https://github.com/WeilunWang/semantic-diffusion-model.
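The classifier-free guidance strategy mentioned in the abstract combines conditional and unconditional score (noise) estimates at each sampling step. A minimal sketch of that combination rule follows; the function name and the guidance-scale symbol `s` are illustrative, not from the paper, and real models would produce the two estimates with a shared network queried with and without the semantic layout.

```python
import numpy as np

def classifier_free_guidance(eps_cond, eps_uncond, s):
    """Combine conditional and unconditional noise estimates.

    s = 0 recovers the unconditional estimate, s = 1 the purely
    conditional one, and s > 1 extrapolates away from the
    unconditional score to strengthen semantic alignment.
    """
    return eps_uncond + s * (eps_cond - eps_uncond)

# Illustrative call with dummy noise estimates (stand-ins for
# the denoising network's outputs at one timestep).
eps_c = np.array([1.0, 2.0])   # estimate given the semantic layout
eps_u = np.array([0.0, 0.0])   # estimate with the layout dropped
guided = classifier_free_guidance(eps_c, eps_u, 2.0)
```

In practice the unconditional estimate is obtained by randomly dropping the semantic layout during training, so a single network serves both roles at sampling time.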