Paper Title
SemI2I: Semantically Consistent Image-to-Image Translation for Domain Adaptation of Remote Sensing Data
Paper Authors
Paper Abstract
Although convolutional neural networks have been proven to be an effective tool for generating high-quality maps from remote sensing images, their performance deteriorates significantly when there is a large domain shift between training and test data. To address this issue, we propose a new data augmentation approach that transfers the style of the test data to the training data using generative adversarial networks. Our semantic segmentation framework consists of first training a U-net on the real training data and then fine-tuning it on the test-stylized fake training data generated by the proposed approach. Our experimental results demonstrate that our framework outperforms existing domain adaptation methods.
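The abstract describes a two-stage training schedule: train a segmentation network on real source-domain data, then fine-tune it on test-stylized fake source data produced by the image-to-image translation step. The following is a minimal sketch of that schedule, not the authors' implementation: the TinyUNet model, dummy tensors, learning rates, and epoch counts are all illustrative assumptions.

```python
# Sketch of the two-stage schedule: (1) train on real source images,
# (2) fine-tune on test-stylized fake source images with the same labels.
# Everything below (model, data, hyper-parameters) is a placeholder.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Toy encoder-decoder standing in for the full U-net."""
    def __init__(self, in_ch=3, n_classes=6):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                                 nn.MaxPool2d(2))
        self.dec = nn.Sequential(nn.ConvTranspose2d(16, 16, 2, stride=2), nn.ReLU(),
                                 nn.Conv2d(16, n_classes, 1))
    def forward(self, x):
        return self.dec(self.enc(x))

def run_step(model, images, labels, optimizer, loss_fn):
    """One optimization step on a (toy) batch; returns the loss value."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

model = TinyUNet()
loss_fn = nn.CrossEntropyLoss()

# Stage 1: train on real source-domain patches (random tensors as stand-ins).
real_imgs = torch.rand(4, 3, 64, 64)
real_lbls = torch.randint(0, 6, (4, 64, 64))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):
    run_step(model, real_imgs, real_lbls, opt, loss_fn)

# Stage 2: fine-tune on test-stylized fake source patches (translated appearance,
# same ground-truth labels), here with a smaller learning rate as an assumption.
fake_imgs = torch.rand(4, 3, 64, 64)  # stand-in for SemI2I-translated images
opt_ft = torch.optim.Adam(model.parameters(), lr=1e-4)
for _ in range(5):
    run_step(model, fake_imgs, real_lbls, opt_ft, loss_fn)
```

Because the translation is semantically consistent, the fake images reuse the original ground-truth labels, which is what makes the second stage a simple fine-tuning pass rather than a new annotation effort.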