Paper Title
Confidence-Aware Multi-Teacher Knowledge Distillation
Paper Authors
Paper Abstract
Knowledge distillation was initially introduced to exploit additional supervision from a single teacher model for training a student model. To boost student performance, some recent variants attempt to exploit diverse knowledge sources from multiple teachers. However, existing studies mainly integrate knowledge from diverse sources by averaging over multiple teacher predictions or combining them with various other label-free strategies, which may mislead the student in the presence of low-quality teacher predictions. To tackle this problem, we propose Confidence-Aware Multi-teacher Knowledge Distillation (CA-MKD), which adaptively assigns sample-wise reliability to each teacher prediction with the help of ground-truth labels, so that teacher predictions close to the one-hot labels receive larger weights. Besides, CA-MKD incorporates intermediate layers to stabilize the knowledge transfer process. Extensive experiments show that our CA-MKD consistently outperforms all compared state-of-the-art methods across various teacher-student architectures.
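To make the weighting idea in the abstract concrete, below is a minimal PyTorch sketch of confidence-aware teacher aggregation: each teacher's soft targets are weighted per sample by how close its prediction is to the ground-truth label (here measured by cross-entropy and normalized with a softmax over teachers). This is an illustrative approximation, not the paper's exact formulation; the function name, temperature value, and the omission of the intermediate-layer term are assumptions.

```python
import torch
import torch.nn.functional as F

def confidence_weighted_distillation_loss(teacher_logits, student_logits, labels, temperature=4.0):
    """Hypothetical sketch of confidence-aware multi-teacher distillation.

    teacher_logits: list of [batch, num_classes] tensors, one per teacher
    student_logits: [batch, num_classes] tensor
    labels: [batch] tensor of ground-truth class indices
    """
    # Per-teacher, per-sample cross-entropy against the ground truth:
    # lower values mean the teacher's prediction is closer to the one-hot label.
    ce_per_teacher = torch.stack(
        [F.cross_entropy(t, labels, reduction="none") for t in teacher_logits], dim=0
    )  # [num_teachers, batch]

    # Sample-wise weights: teachers with lower cross-entropy get larger weights.
    weights = F.softmax(-ce_per_teacher, dim=0)  # [num_teachers, batch]

    # Temperature-scaled KL divergence between the student and each teacher.
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    kd_losses = []
    for t in teacher_logits:
        p_teacher = F.softmax(t / temperature, dim=1)
        kl = F.kl_div(log_p_student, p_teacher, reduction="none").sum(dim=1)  # [batch]
        kd_losses.append(kl)
    kd_losses = torch.stack(kd_losses, dim=0)  # [num_teachers, batch]

    # Aggregate over teachers with the confidence weights, then average over the batch.
    return (weights * kd_losses).sum(dim=0).mean() * (temperature ** 2)
```

In this sketch, a teacher whose prediction disagrees with the ground-truth label contributes little to the distillation target for that sample, which is the behavior the abstract describes for avoiding low-quality teacher predictions.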