Paper Title
Semantically Accurate Super-Resolution Generative Adversarial Networks
Paper Authors
Paper Abstract
This work addresses the problems of semantic segmentation and image super-resolution by jointly considering the performance of both in training a Generative Adversarial Network (GAN). We propose a novel architecture and domain-specific feature loss, allowing super-resolution to operate as a pre-processing step that increases the performance of downstream computer vision tasks, specifically semantic segmentation. We demonstrate this approach using Nearmap's aerial imagery dataset, which covers hundreds of urban areas at 5-7 cm per pixel resolution. We show that the proposed approach improves perceived image quality as well as quantitative segmentation accuracy across all prediction classes, yielding average accuracy improvements of 11.8% and 108% at 4x and 32x super-resolution, respectively, compared with state-of-the-art single-network methods. This work demonstrates that jointly considering image-based and task-specific losses can improve the performance of both, and advances the state-of-the-art in semantic-aware super-resolution of aerial imagery.
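The central idea of the abstract, training the super-resolution generator against a combination of image-based losses and a task-specific segmentation loss, can be illustrated with a minimal PyTorch-style sketch. This is an assumption-laden illustration, not the authors' implementation: the module names (generator, discriminator, seg_net), the choice of L1 and cross-entropy terms, and the loss weights are all hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of a joint objective in the spirit of the abstract:
# an image-based term (pixel + adversarial) plus a task-specific
# segmentation term computed on the super-resolved output.
# All modules and weights below are illustrative placeholders,
# not the paper's actual architecture or hyperparameters.

def joint_sr_loss(generator, discriminator, seg_net,
                  lr_image, hr_image, seg_labels,
                  w_pix=1.0, w_adv=1e-3, w_seg=1e-1):
    sr_image = generator(lr_image)            # super-resolved image

    # Image-based losses: pixel reconstruction and adversarial realism.
    pixel_loss = F.l1_loss(sr_image, hr_image)
    d_fake = discriminator(sr_image)
    adv_loss = F.binary_cross_entropy_with_logits(
        d_fake, torch.ones_like(d_fake))

    # Task-specific loss: per-pixel segmentation accuracy on the SR output,
    # so the generator is rewarded for preserving semantically relevant detail.
    seg_logits = seg_net(sr_image)
    seg_loss = F.cross_entropy(seg_logits, seg_labels)

    return w_pix * pixel_loss + w_adv * adv_loss + w_seg * seg_loss
```

Under this framing, the segmentation network acts as a domain-specific feature critic: gradients from the segmentation loss flow back into the generator, pushing it toward reconstructions that are useful for the downstream task rather than merely plausible to the discriminator.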