Title
Evidence fusion with contextual discounting for multi-modality medical image segmentation
Authors
Abstract
As information sources are usually imperfect, it is necessary to take their reliability into account in multi-source information fusion tasks. In this paper, we propose a new deep framework that merges multi-MR image segmentation results using the formalism of Dempster-Shafer theory, while accounting for the reliability of different modalities relative to different classes. The framework is composed of an encoder-decoder feature extraction module, an evidential segmentation module that computes a belief function at each voxel for each modality, and a multi-modality evidence fusion module, which assigns a vector of discount rates to each modality's evidence and combines the discounted evidence using Dempster's rule. The whole framework is trained by minimizing a new loss function based on a discounted Dice index, to increase both segmentation accuracy and reliability. The method was evaluated on the BraTS 2021 database of 1251 patients with brain tumors. Quantitative and qualitative results show that our method outperforms the state of the art, and implements an effective new idea for merging multi-source information within deep neural networks.
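To make the fusion step concrete, the following is a minimal sketch of contextual discounting followed by Dempster's rule at a single voxel. It relies on a standard property of Dempster-Shafer theory: on singletons, plausibility equals commonality, so the conjunctive combination reduces to a pointwise product of discounted contour functions, followed by normalization. The class-reliability vectors (`beta_t1`, `beta_flair`), the modality names, and the toy plausibility values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def contextual_discount(pl, beta):
    """Contextual discounting of a contour function (plausibilities of
    the singleton classes). beta[k] in [0, 1] is the assumed reliability
    of the source for class k: beta = 1 keeps the evidence unchanged,
    beta = 0 makes it vacuous (pl = 1 for every class)."""
    return 1.0 - beta * (1.0 - pl)

def fuse(pl_list, betas):
    """Combine per-modality contour functions with Dempster's rule.
    On singletons the conjunctive rule is a pointwise product of
    plausibilities; normalizing yields per-class fusion scores."""
    fused = np.ones_like(pl_list[0])
    for pl, beta in zip(pl_list, betas):
        fused *= contextual_discount(pl, beta)
    return fused / fused.sum(axis=-1, keepdims=True)

# Toy example: one voxel, 3 classes, 2 modalities (hypothetical values).
pl_t1 = np.array([0.9, 0.3, 0.2])      # modality 1 plausibilities
pl_flair = np.array([0.4, 0.8, 0.3])   # modality 2 plausibilities
beta_t1 = np.array([1.0, 0.2, 1.0])    # modality 1 deemed unreliable for class 1
beta_flair = np.array([0.3, 1.0, 1.0]) # modality 2 deemed unreliable for class 0
scores = fuse([pl_t1, pl_flair], [beta_t1, beta_flair])
```

Discounting a class toward reliability 0 drives its plausibility toward 1 (total ignorance), so an unreliable modality no longer vetoes the classes it cannot see; the reliability vectors themselves are learned end-to-end in the proposed framework.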