Title

Explainable Supervised Domain Adaptation

Authors

Vidhya Kamakshi, Narayanan C. Krishnan

Abstract

Domain adaptation techniques have contributed to the success of deep learning. Leveraging knowledge from an auxiliary source domain for learning in a labeled-data-scarce target domain is fundamental to domain adaptation. While these techniques result in increased accuracy, the adaptation process, particularly the knowledge leveraged from the source domain, remains unclear. This paper proposes an explainable-by-design supervised domain adaptation framework, XSDA-Net. We integrate a case-based reasoning mechanism into XSDA-Net to explain the prediction of a test instance in terms of similar-looking regions in the source and target training images. We empirically demonstrate the utility of the proposed framework by curating domain adaptation settings on datasets widely known to exhibit part-based explainability.
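To make the case-based reasoning idea concrete, the following is a minimal, illustrative sketch of how a prediction could be explained by matching regions of a test image's feature map against prototype vectors drawn from source and target training images. This is not the authors' XSDA-Net implementation; all names (cosine_similarity_map, explain_prediction, the feature-map shapes, and the training-image filenames) are hypothetical placeholders chosen for the example.

```python
# Conceptual sketch of prototype/case-based explanation, NOT the XSDA-Net code.
import numpy as np

def cosine_similarity_map(feature_map, prototype):
    """Similarity of every spatial location of a CxHxW feature map to a C-dim prototype."""
    c, h, w = feature_map.shape
    flat = feature_map.reshape(c, -1)                                  # C x (H*W)
    flat = flat / (np.linalg.norm(flat, axis=0, keepdims=True) + 1e-8)
    p = prototype / (np.linalg.norm(prototype) + 1e-8)
    return (p @ flat).reshape(h, w)                                    # H x W similarity map

def explain_prediction(test_features, prototypes, prototype_origins):
    """For each prototype, find the best-matching test region and report the
    source/target training image that prototype was drawn from."""
    explanations = []
    for proto, origin in zip(prototypes, prototype_origins):
        sim = cosine_similarity_map(test_features, proto)
        y, x = np.unravel_index(np.argmax(sim), sim.shape)
        explanations.append({
            "train_image": origin,          # hypothetical training image the prototype came from
            "test_region": (y, x),          # most similar location in the test feature map
            "similarity": float(sim[y, x]),
        })
    return sorted(explanations, key=lambda e: -e["similarity"])

# Toy usage: a 64-channel 7x7 feature map and two prototypes tied to training images.
rng = np.random.default_rng(0)
test_feat = rng.normal(size=(64, 7, 7))
protos = [rng.normal(size=64), rng.normal(size=64)]
origins = ["source_train_012.png", "target_train_007.png"]
print(explain_prediction(test_feat, protos, origins)[0])
```

In this sketch, the explanation for a test instance is simply the ranked list of (training image, most similar test region) pairs, which mirrors the abstract's description of explaining a prediction via similar-looking regions in source and target training images.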
