Paper Title


Classification of Diabetic Retinopathy Using Unlabeled Data and Knowledge Distillation

Authors

Sajjad Abbasi, Mohsen Hajabdollahi, Pejman Khadivi, Nader Karimi, Roshanak Roshandel, Shahram Shirani, Shadrokh Samavi

Abstract


Knowledge distillation allows transferring knowledge from a pre-trained model to another model. However, it suffers from the constraint that the two models need to be architecturally similar. Knowledge distillation addresses some of the shortcomings associated with transfer learning by generalizing a complex model to a lighter one. However, some parts of the knowledge may not be distilled sufficiently. In this paper, a novel knowledge distillation approach using transfer learning is proposed. The proposed method transfers the entire knowledge of a model to a new, smaller one. To accomplish this, unlabeled data are used in an unsupervised manner to transfer the maximum amount of knowledge to the new, slimmer model. The proposed method can be beneficial in medical image analysis, where labeled data are typically scarce. The proposed approach is evaluated in the context of image classification for diagnosing Diabetic Retinopathy on two publicly available datasets, Messidor and EyePACS. Simulation results demonstrate that the approach is effective in transferring knowledge from a complex model to a lighter one. Furthermore, experimental results illustrate that the performance of different small models is improved significantly using unlabeled data and knowledge distillation.
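The core mechanism the abstract describes, matching a small student model's softened outputs to a teacher's on unlabeled images, can be sketched as follows. This is a minimal illustration of a standard temperature-scaled distillation loss, not the paper's exact method; the temperature, the five-class diabetic-retinopathy grading, and the logit values are illustrative assumptions.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: higher T softens the distribution,
    # exposing the teacher's "dark knowledge" about non-top classes.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, T=4.0):
    # KL divergence between softened teacher and student outputs.
    # No ground-truth label appears here, so unlabeled fundus images
    # suffice to drive the student toward the teacher's behavior.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    # The T*T factor keeps gradient magnitudes comparable across temperatures.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q)) * T * T

# Hypothetical logits for one unlabeled image over 5 assumed DR severity grades.
teacher = [2.0, 0.5, -1.0, 0.1, -0.5]
student = [1.5, 0.7, -0.8, 0.0, -0.4]
loss = distillation_loss(teacher, student)
```

In training, this loss would be minimized over the student's parameters for each batch of unlabeled images; it is zero exactly when the student reproduces the teacher's softened distribution.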
