Paper Title
Unsupervised RGB-to-Thermal Domain Adaptation via Multi-Domain Attention Network
Paper Authors
Paper Abstract
This work presents a new method for unsupervised thermal image classification and semantic segmentation that transfers knowledge from the RGB domain using a multi-domain attention network. Our method does not require any thermal annotations or co-registered RGB-thermal pairs, enabling robots to perform visual tasks at night and in adverse weather conditions without incurring additional costs of data labeling and registration. Current unsupervised domain adaptation methods seek to align images or features globally across domains. However, for cross-modal data, where the domain shift is significantly larger, not all features are transferable. We address this problem with a shared backbone network that promotes generalization and domain-specific attention modules that reduce negative transfer by attending to domain-invariant, easily-transferable features. Our approach outperforms the state-of-the-art RGB-to-thermal adaptation method on classification benchmarks and is successfully applied to thermal river scene segmentation using only synthetic RGB images. Our code is publicly available at https://github.com/ganlumomo/thermal-uda-attention.
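The architectural idea in the abstract (a backbone shared across the RGB and thermal domains, a separate attention module per domain, and a shared classifier) can be sketched as below. This is a minimal illustrative sketch, not the authors' released implementation; the ResNet-18 backbone, the squeeze-and-excitation style channel attention, and all module names and sizes are assumptions made for illustration (see the linked repository for the actual code).

```python
# Minimal sketch (assumed design, not the paper's code): a backbone shared
# across domains, one attention module per domain, and a shared classifier.
import torch
import torch.nn as nn
from torchvision.models import resnet18


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel gate (an illustrative choice of attention)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Re-weight feature channels; the intent is to emphasize
        # domain-invariant, easily-transferable features.
        return x * self.gate(x)


class MultiDomainAttentionNet(nn.Module):
    """Shared backbone + one attention module per domain + shared classifier."""

    def __init__(self, num_classes: int, domains=("rgb", "thermal")):
        super().__init__()
        backbone = resnet18(weights=None)
        feat_dim = backbone.fc.in_features  # 512 for ResNet-18
        # Keep only the convolutional feature extractor (drop avgpool + fc).
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])
        self.attention = nn.ModuleDict({d: ChannelAttention(feat_dim) for d in domains})
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(feat_dim, num_classes)  # shared across domains

    def forward(self, x: torch.Tensor, domain: str) -> torch.Tensor:
        feats = self.backbone(x)               # shared weights promote generalization
        feats = self.attention[domain](feats)  # domain-specific attention
        return self.classifier(self.pool(feats).flatten(1))


if __name__ == "__main__":
    model = MultiDomainAttentionNet(num_classes=10)
    rgb = torch.randn(2, 3, 224, 224)      # labeled source RGB batch
    thermal = torch.randn(2, 3, 224, 224)  # unlabeled target thermal batch (3-channel input assumed)
    print(model(rgb, "rgb").shape)          # torch.Size([2, 10])
    print(model(thermal, "thermal").shape)  # torch.Size([2, 10])
```

In this sketch the shared backbone and classifier receive gradients from both domains, while each attention module only ever sees its own domain's data, which is one way to realize the split the abstract describes between generalization and domain-specific feature selection.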