Paper Title


PC-GANs: Progressive Compensation Generative Adversarial Networks for Pan-sharpening

Paper Authors

Yinghui Xing, Shuyuan Yang, Song Wang, Yan Zhang, Yanning Zhang

Paper Abstract

The fusion of multispectral and panchromatic images is commonly known as pan-sharpening. Most available deep learning-based pan-sharpening methods sharpen the multispectral (MS) images through a one-step scheme, which depends strongly on the reconstruction ability of the network. However, remote sensing images always exhibit large variations, and as a result these one-step methods are vulnerable to error accumulation and thus incapable of preserving spatial details as well as spectral information. In this paper, we propose a novel two-step model for pan-sharpening that sharpens the MS image through progressive compensation of spatial and spectral information. First, a deep multiscale guided generative adversarial network is used to preliminarily enhance the spatial resolution of the MS image. Starting from the pre-sharpened MS image in the coarse domain, our approach then progressively refines the spatial and spectral residuals through a pair of generative adversarial networks (GANs) with reverse architectures. The whole model is composed of triple GANs, and based on this specific architecture, a joint compensation loss function is designed to enable the triple GANs to be trained simultaneously. Moreover, the spatial-spectral residual compensation structure proposed in this paper can be extended to other pan-sharpening methods to further enhance their fusion results. Extensive experiments are performed on different datasets, and the results demonstrate the effectiveness and efficiency of our proposed method.
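The two-step data flow described in the abstract (coarse pre-sharpening, then spatial and spectral residual compensation) can be sketched as below. This is a minimal NumPy illustration of the pipeline structure only: the functions `pre_sharpen`, `spatial_residual`, and `spectral_residual` are hypothetical placeholders standing in for the paper's three GAN generators, and the simple arithmetic inside them is not the paper's actual method.

```python
import numpy as np

def pre_sharpen(ms_lr, pan):
    # Step 1 (placeholder for GAN #1): coarsely sharpen the MS image by
    # upsampling it to the PAN resolution (nearest-neighbour here; the
    # paper uses a deep multiscale guided GAN).
    scale = pan.shape[0] // ms_lr.shape[0]
    return np.repeat(np.repeat(ms_lr, scale, axis=0), scale, axis=1)

def spatial_residual(coarse, pan):
    # Step 2a (placeholder for GAN #2): predict a spatial-detail residual
    # driven by the high-frequency content of the PAN image.
    return (pan - pan.mean())[..., None] * 0.1 * np.ones(coarse.shape[-1])

def spectral_residual(refined, ms_lr):
    # Step 2b (placeholder for GAN #3): predict a per-band spectral
    # correction so the result stays consistent with the original MS bands.
    target = ms_lr.mean(axis=(0, 1))
    return (target - refined.mean(axis=(0, 1)))[None, None, :]

def pc_pipeline(ms_lr, pan):
    # Progressive compensation: each stage adds a residual on top of the
    # previous stage's output, rather than reconstructing in one step.
    coarse = pre_sharpen(ms_lr, pan)
    refined = coarse + spatial_residual(coarse, pan)
    return refined + spectral_residual(refined, ms_lr)

ms = np.random.rand(16, 16, 4)   # low-resolution 4-band MS image
pan = np.random.rand(64, 64)     # high-resolution panchromatic image
out = pc_pipeline(ms, pan)
print(out.shape)                 # (64, 64, 4)
```

The residual formulation is the point of the sketch: because each GAN only has to model a correction term, errors from the coarse stage can be compensated rather than accumulated, which is the motivation the abstract gives for the two-step design.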
