Paper Title
A Transductive Multi-Head Model for Cross-Domain Few-Shot Learning
Paper Authors
Paper Abstract
In this paper, we present a new method, Transductive Multi-Head Few-Shot Learning (TMHFS), to address the Cross-Domain Few-Shot Learning (CD-FSL) challenge. The TMHFS method extends the Meta-Confidence Transduction (MCT) and Dense Feature-Matching Networks (DFMN) method [2] by introducing a new prediction head, i.e., an instance-wise global classification network based on semantic information, after the common feature embedding network. We train the embedding network with the multiple heads, i.e., the MCT loss, the DFMN loss, and the semantic classifier loss, simultaneously in the source domain. For few-shot learning in the target domain, we first fine-tune the embedding network with only the semantic global classifier and the support instances, and then use the MCT part to predict labels for the query set with the fine-tuned embedding network. Moreover, we further exploit data augmentation techniques during the fine-tuning and test stages to improve prediction performance. The experimental results demonstrate that the proposed method greatly outperforms the strong baseline, fine-tuning, on four different target domains.
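The abstract describes training a shared embedding network with three heads whose losses are combined. The following is a minimal PyTorch sketch of that structure, not the authors' implementation: the backbone, the class names, and the equal (unweighted) summation of the three losses are all assumptions, and the episodic MCT and DFMN losses from [2] are left as placeholder callables.

```python
import torch
import torch.nn as nn

class MultiHeadFewShot(nn.Module):
    """Hypothetical multi-head model: a shared embedding followed by an
    instance-wise global semantic classifier over the source classes."""

    def __init__(self, embed_dim=64, num_source_classes=64):
        super().__init__()
        # Stand-in feature embedding network (a real backbone, e.g. a
        # ResNet, would replace this).
        self.embedding = nn.Sequential(
            nn.Conv2d(3, embed_dim, 3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Semantic global classification head over source-domain classes.
        self.semantic_head = nn.Linear(embed_dim, num_source_classes)

    def forward(self, x):
        return self.embedding(x)

def training_loss(model, images, labels, mct_loss_fn, dfmn_loss_fn):
    """Sum of the three head losses used to train the shared embedding
    in the source domain. `mct_loss_fn` and `dfmn_loss_fn` are hypothetical
    placeholders for the episodic MCT and DFMN losses of [2]; only the
    semantic cross-entropy term is concrete here."""
    feats = model(images)
    sem_loss = nn.functional.cross_entropy(model.semantic_head(feats), labels)
    return sem_loss + mct_loss_fn(feats, labels) + dfmn_loss_fn(feats, labels)

if __name__ == "__main__":
    model = MultiHeadFewShot()
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x = torch.randn(8, 3, 32, 32)            # dummy source-domain batch
    y = torch.randint(0, 64, (8,))           # dummy source-class labels
    # Zero stand-ins for the MCT/DFMN losses, just so the sketch runs.
    zero = lambda f, l: torch.tensor(0.0)
    loss = training_loss(model, x, y, zero, zero)
    loss.backward()
    opt.step()
```

Following the abstract, at target-domain fine-tuning time only the embedding and the semantic head would be updated on the support instances, after which the MCT part predicts the query labels transductively.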