Paper Title
Distractor-Aware Neuron Intrinsic Learning for Generic 2D Medical Image Classifications
Paper Authors
Paper Abstract
Medical image analysis benefits Computer Aided Diagnosis (CADx). A fundamental analysis approach is the classification of medical images, which serves skin lesion diagnosis, diabetic retinopathy grading, and cancer classification on histological images. When learning these discriminative classifiers, we observe that convolutional neural networks (CNNs) are vulnerable to distractor interference. This is due to the similar appearance of samples from different categories (i.e., small inter-class distance). Existing attempts select distractors from input images by empirically estimating their potential effects on the classifier; how these distractors intrinsically affect CNN classification remains unclear. In this paper, we explore distractors from the CNN feature space by proposing a neuron intrinsic learning method. We formulate a novel distractor-aware loss that encourages a large distance between the original image and its distractor in the feature space. This loss is combined with the original classification loss to update the network parameters via back-propagation. Neuron intrinsic learning first explores the distractors crucial to the deep classifier and then uses them to inherently robustify the CNN. Extensive experiments on medical image benchmark datasets indicate that the proposed method performs favorably against state-of-the-art approaches.
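The abstract describes combining a classification loss with a distractor-aware term that pushes an image and its distractor apart in feature space. Below is a minimal, hypothetical PyTorch sketch of that idea; the margin- and weight-style hyperparameters, the hinge formulation, and all names are assumptions for illustration, not the paper's actual formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DistractorAwareLoss(nn.Module):
    """Sketch: classification loss plus a term that encourages a large
    feature-space distance between an image and its distractor."""

    def __init__(self, margin: float = 1.0, weight: float = 0.5):
        super().__init__()
        self.margin = margin   # hypothetical margin on the feature distance
        self.weight = weight   # hypothetical trade-off between the two terms
        self.cls_loss = nn.CrossEntropyLoss()

    def forward(self, logits, labels, feat_orig, feat_distractor):
        # Standard classification loss on the original images.
        l_cls = self.cls_loss(logits, labels)
        # Penalize distractor features that lie closer than `margin`
        # to the original image features (hinge on the L2 distance).
        dist = F.pairwise_distance(feat_orig, feat_distractor)
        l_da = F.relu(self.margin - dist).mean()
        return l_cls + self.weight * l_da
```

In use, the CNN would produce `logits` and features for the original images and features for their distractors in the same forward pass; back-propagating this combined loss then updates the network parameters, roughly matching the training procedure the abstract outlines.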