Paper Title
Multi-modal Brain Tumor Segmentation via Missing Modality Synthesis and Modality-level Attention Fusion
Paper Authors
Paper Abstract
Multi-modal magnetic resonance (MR) imaging provides great potential for diagnosing and analyzing brain gliomas. In clinical scenarios, common MR sequences such as T1, T2 and FLAIR can be obtained simultaneously in a single scanning process. However, acquiring contrast-enhanced modalities such as T1ce requires additional time, cost, and injection of contrast agent. As such, it is clinically meaningful to develop a method to synthesize unavailable modalities, which can also serve as additional inputs to downstream tasks (e.g., brain tumor segmentation) to enhance performance. In this work, we propose an end-to-end framework named Modality-Level Attention Fusion Network (MAF-Net), wherein we conduct patchwise contrastive learning to extract multi-modal latent features and dynamically assign attention weights to fuse different modalities. Through extensive experiments on BraTS2020, our proposed MAF-Net is found to yield superior T1ce synthesis performance (SSIM of 0.8879 and PSNR of 22.78) and accurate brain tumor segmentation (mean Dice scores of 67.9%, 41.8% and 88.0% on segmenting the tumor core, enhancing tumor and whole tumor, respectively).
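To make the modality-level attention fusion idea concrete, the sketch below fuses per-modality latent features with softmax-normalized attention weights. This is a minimal illustration only: the abstract does not specify MAF-Net's exact architecture, so the scalar per-modality logits, the feature vectors, and the function name are all hypothetical stand-ins for what would be learned CNN features and weights in the actual network.

```python
import math

def modality_attention_fusion(features, logits):
    """Fuse per-modality feature vectors via softmax attention weights.

    features: dict mapping modality name -> feature vector (list of floats,
              all the same length). Stand-in for learned latent features.
    logits:   dict mapping modality name -> scalar attention logit.
              Stand-in for weights a network would predict dynamically.

    Returns (fused_vector, weights) where weights sum to 1.
    """
    mods = list(features)
    # Softmax over the per-modality logits -> attention weights.
    exps = [math.exp(logits[m]) for m in mods]
    total = sum(exps)
    weights = {m: e / total for m, e in zip(mods, exps)}
    # Weighted sum of the modality feature vectors.
    dim = len(next(iter(features.values())))
    fused = [sum(weights[m] * features[m][i] for m in mods)
             for i in range(dim)]
    return fused, weights

# Toy example with the three commonly acquired MR sequences.
feats = {"T1": [1.0, 0.0], "T2": [0.0, 1.0], "FLAIR": [1.0, 1.0]}
scores = {"T1": 0.0, "T2": 0.0, "FLAIR": 0.0}  # equal logits -> equal weights
fused, w = modality_attention_fusion(feats, scores)
```

With equal logits each modality receives weight 1/3, so the fused vector is the plain average; in the real network, unequal learned logits would emphasize the more informative modality for each case.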