Paper Title
Double-Uncertainty Weighted Method for Semi-supervised Learning
Paper Authors
Paper Abstract
Though deep learning has achieved advanced performance recently, segmentation remains a challenging task in the field of medical imaging, as obtaining reliable labeled training data is time-consuming and expensive. In this paper, we propose a double-uncertainty weighted method for semi-supervised segmentation based on the teacher-student model. The teacher model provides guidance for the student model by penalizing their inconsistent predictions on both labeled and unlabeled data. We train the teacher model using Bayesian deep learning to obtain double uncertainty, i.e., segmentation uncertainty and feature uncertainty. This is the first work to extend segmentation uncertainty estimation to feature uncertainty, which reveals the model's capability to capture information among channels. A learnable uncertainty consistency loss is designed for the unsupervised learning process, in which prediction and uncertainty interact. With no ground truth for supervision, it can still encourage more accurate predictions from the teacher and help the model reduce uncertain estimations. Furthermore, our proposed double uncertainty serves as a weight on each inconsistency penalty to balance and harmonize the supervised and unsupervised training processes. We validate the proposed feature uncertainty and loss function through qualitative and quantitative analyses. Experimental results show that our method outperforms the state-of-the-art uncertainty-based semi-supervised methods on two public medical datasets.
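The abstract does not spell out the loss formulation, but the core mechanism (Monte Carlo dropout uncertainty from the teacher, used to modulate the teacher-student consistency penalty) can be sketched as below. This is an illustrative NumPy sketch, not the paper's implementation: the `toy_model` closure, the number of stochastic passes, and the `exp(-uncertainty)` weighting (a common choice in uncertainty-aware mean-teacher variants) are all assumptions for demonstration.

```python
import numpy as np

def mc_dropout_predict(model, x, n_samples=8, rng=None):
    """Run several stochastic forward passes (dropout kept active) and
    return the mean prediction plus a per-pixel segmentation uncertainty
    (predictive entropy, binary case). `model(x, rng)` is any callable
    that injects randomness on each call -- a stand-in for a dropout net."""
    rng = rng or np.random.default_rng(0)
    preds = np.stack([model(x, rng) for _ in range(n_samples)])  # (T, H, W)
    mean = preds.mean(axis=0)
    eps = 1e-8
    uncertainty = -(mean * np.log(mean + eps)
                    + (1.0 - mean) * np.log(1.0 - mean + eps))
    return mean, uncertainty

def uncertainty_weighted_consistency(student_pred, teacher_pred, uncertainty):
    """Teacher-student consistency penalty, down-weighted where the teacher
    is uncertain. exp(-uncertainty) is one plausible weighting; the paper's
    learnable double-uncertainty weight is more elaborate."""
    weight = np.exp(-uncertainty)
    return float(np.mean(weight * (student_pred - teacher_pred) ** 2))
```

A usage example: build a noisy "teacher", estimate its uncertainty, and penalize the student more heavily only where the teacher is confident.

```python
rng = np.random.default_rng(42)
logits = rng.normal(0.0, 2.0, size=(16, 16))

def toy_model(x, r):
    # simulated dropout: randomly zero ~10% of logits on each pass
    mask = (r.random(x.shape) > 0.1).astype(float)
    return 1.0 / (1.0 + np.exp(-(x * mask)))

teacher_mean, seg_unc = mc_dropout_predict(toy_model, logits, n_samples=8)
student_pred = teacher_mean + 0.05  # hypothetical slightly-off student
loss = uncertainty_weighted_consistency(student_pred, teacher_mean, seg_unc)
```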