Paper Title
CR-Fill: Generative Image Inpainting with Auxiliary Contextual Reconstruction
Paper Authors
Paper Abstract
Recent deep generative inpainting methods use attention layers to allow the generator to explicitly borrow feature patches from the known region to complete a missing region. Due to the lack of supervision signals for the correspondence between missing regions and known regions, attention may fail to find proper reference features, which often leads to artifacts in the results. Moreover, it computes pair-wise similarities across the entire feature map during inference, which brings significant computational overhead. To address these issues, we propose to teach this patch-borrowing behavior to an attention-free generator by jointly training it on an auxiliary contextual reconstruction task, which encourages the generated output to remain plausible even when it is reconstructed from the surrounding known regions. The auxiliary branch can be seen as a learnable loss function, named the contextual reconstruction (CR) loss, in which the query-reference feature similarity and a reference-based reconstructor are optimized jointly with the inpainting generator. The auxiliary branch (i.e., the CR loss) is needed only during training; only the inpainting generator is required at inference time. Experimental results demonstrate that the proposed inpainting model compares favourably against the state of the art in both quantitative and visual performance.
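To make the patch-borrowing idea behind the CR loss concrete, the sketch below shows, in PyTorch, how a contextual-reconstruction-style penalty could be computed: feature patches inside the hole are matched to known-region patches by cosine similarity, the hole content is re-synthesised as an attention-weighted combination of known-region patches, and the generator is penalised when its output deviates from this reconstruction. This is a simplified illustration under stated assumptions, not the paper's exact formulation: the function name `contextual_reconstruction_loss`, the non-overlapping patch size, the fixed cosine similarity, and the matching spatial sizes of features and image are all made up for clarity, whereas the actual CR loss learns the similarity and the reference-based reconstructor jointly with the generator.

```python
import torch
import torch.nn.functional as F


def contextual_reconstruction_loss(feat, gen_img, mask, patch=4):
    """Minimal sketch of a contextual-reconstruction-style loss.

    feat:    (B, C, H, W) generator features used for query-reference similarity.
    gen_img: (B, 3, H, W) generated image (assumed here to match the size of `feat`).
    mask:    (B, 1, H, W) binary mask, 1 for the missing region, 0 for the known region.
    Assumes at least one known-region patch exists per image.
    """
    # Split the feature map, image, and mask into non-overlapping patches.
    f = F.unfold(feat, kernel_size=patch, stride=patch)          # (B, C*p*p, N)
    g = F.unfold(gen_img, kernel_size=patch, stride=patch)       # (B, 3*p*p, N)
    m = F.unfold(mask, kernel_size=patch, stride=patch).mean(1)  # (B, N) hole fraction

    # Cosine similarity between every pair of feature patches.
    f = F.normalize(f, dim=1)
    sim = torch.bmm(f.transpose(1, 2), f)                        # (B, N, N)

    # References must come from the known region: suppress similarity to hole patches.
    hole = (m > 0.5).float()                                      # (B, N), 1 = hole patch
    sim = sim.masked_fill(hole.unsqueeze(1) > 0, float("-inf"))
    attn = F.softmax(sim, dim=2)                                  # (B, N, N) borrowing weights

    # Reconstruct every patch as a weighted sum of known-region image patches.
    recon = torch.bmm(g, attn.transpose(1, 2))                    # (B, 3*p*p, N)

    # Penalise generated hole content that deviates from its contextual reconstruction.
    diff = (g - recon).abs().mean(1)                              # (B, N)
    return (diff * hole).sum() / (hole.sum() + 1e-8)


if __name__ == "__main__":
    # Toy usage with random tensors; shapes are illustrative only.
    feat = torch.randn(2, 64, 32, 32)
    img = torch.randn(2, 3, 32, 32)
    mask = (torch.rand(2, 1, 32, 32) > 0.7).float()
    print(contextual_reconstruction_loss(feat, img, mask).item())
```

In this sketch the similarity is computed on generator features while the reconstruction is assembled from image patches, mirroring the abstract's description that the auxiliary branch only shapes the training signal; at inference time the function would simply not be called, leaving an attention-free generator.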