Paper Title

CrossMoDA 2021 challenge: Benchmark of Cross-Modality Domain Adaptation techniques for Vestibular Schwannoma and Cochlea Segmentation

Authors

Dorent, Reuben, Kujawa, Aaron, Ivory, Marina, Bakas, Spyridon, Rieke, Nicola, Joutard, Samuel, Glocker, Ben, Cardoso, Jorge, Modat, Marc, Batmanghelich, Kayhan, Belkov, Arseniy, Calisto, Maria Baldeon, Choi, Jae Won, Dawant, Benoit M., Dong, Hexin, Escalera, Sergio, Fan, Yubo, Hansen, Lasse, Heinrich, Mattias P., Joshi, Smriti, Kashtanova, Victoriya, Kim, Hyeon Gyu, Kondo, Satoshi, Kruse, Christian N., Lai-Yuen, Susana K., Li, Hao, Liu, Han, Ly, Buntheng, Oguz, Ipek, Shin, Hyungseob, Shirokikh, Boris, Su, Zixian, Wang, Guotai, Wu, Jianghao, Xu, Yanwu, Yao, Kai, Zhang, Li, Ourselin, Sebastien, Shapey, Jonathan, Vercauteren, Tom

Abstract

Domain Adaptation (DA) has recently attracted strong interest in the medical imaging community. While a large variety of DA techniques has been proposed for image segmentation, most of these techniques have been validated either on private datasets or on small publicly available datasets. Moreover, these datasets mostly addressed single-class problems. To tackle these limitations, the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021). CrossMoDA is the first large and multi-class benchmark for unsupervised cross-modality DA. The challenge's goal is to segment two key brain structures involved in the follow-up and treatment planning of vestibular schwannoma (VS): the VS and the cochleas. Currently, the diagnosis and surveillance of patients with VS are performed using contrast-enhanced T1 (ceT1) MRI. However, there is growing interest in using non-contrast sequences such as high-resolution T2 (hrT2) MRI. Therefore, we created an unsupervised cross-modality segmentation benchmark. The training set provides annotated ceT1 scans (N=105) and unpaired, non-annotated hrT2 scans (N=105). The aim was to automatically perform unilateral VS and bilateral cochlea segmentation on the hrT2 scans provided in the testing set (N=137). A total of 16 teams submitted their algorithms for the evaluation phase. The level of performance reached by the top-performing teams is strikingly high (best median Dice: VS 88.4%, cochleas 85.7%) and close to full supervision (median Dice: VS 92.5%, cochleas 87.7%). All top-performing methods made use of an image-to-image translation approach to transform the source-domain images into pseudo-target-domain images. A segmentation network was then trained using these generated images and the manual annotations provided for the source images.
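The pipeline shared by the top-performing teams (translate annotated source images into pseudo-target images, then train a segmenter on the translated images with the source labels) can be illustrated with a deliberately minimal, self-contained sketch. Everything here is an illustrative assumption rather than any team's actual method: the synthetic data, the intensity-inversion "translator" (standing in for a learned image-to-image translation network), and the threshold "segmenter" (standing in for a segmentation CNN).

```python
import numpy as np

rng = np.random.default_rng(0)

def make_images(n, fg_mean, bg_mean):
    """Synthetic 32x32 'scans': a square lesion with mean intensity
    fg_mean on a background with mean bg_mean, plus Gaussian noise."""
    imgs, masks = [], []
    for _ in range(n):
        img = rng.normal(bg_mean, 0.05, (32, 32))
        mask = np.zeros((32, 32), dtype=bool)
        r, c = rng.integers(4, 20, 2)
        mask[r:r + 8, c:c + 8] = True
        img[mask] = rng.normal(fg_mean, 0.05, mask.sum())
        imgs.append(np.clip(img, 0.0, 1.0))
        masks.append(mask)
    return np.array(imgs), np.array(masks)

# Source domain (stand-in for ceT1): annotated, lesion appears bright.
src_imgs, src_masks = make_images(20, fg_mean=0.8, bg_mean=0.3)
# Target domain (stand-in for hrT2): lesion appears dark; its masks are
# used only for evaluation, never for training.
tgt_imgs, tgt_masks = make_images(20, fg_mean=0.2, bg_mean=0.7)

# Step 1: translate source images into pseudo-target-domain images.
# The challenge entries learned this from unpaired data; a simple
# intensity inversion is a crude stand-in for such a network.
pseudo_tgt = 1.0 - src_imgs

# Step 2: train a segmenter on (pseudo-target image, source annotation)
# pairs. A global intensity threshold chosen to maximise Dice stands in
# for the segmentation CNNs used in the challenge.
def dice(pred, gt):
    return 2.0 * (pred & gt).sum() / (pred.sum() + gt.sum())

best_t, best_d = 0.0, -1.0
for t in np.linspace(0.05, 0.95, 19):
    d = dice(pseudo_tgt < t, src_masks)
    if d > best_d:
        best_t, best_d = t, d

# Step 3: apply the trained segmenter to real target-domain images.
test_dice = dice(tgt_imgs < best_t, tgt_masks)
print(f"threshold={best_t:.2f}  Dice on target-domain images: {test_dice:.3f}")
```

The toy stand-ins only preserve the structure of the pipeline: in the actual submissions, Step 1 was a learned translation network and Step 2 a full segmentation network, but the key property is the same, namely that no target-domain annotations are ever used for training.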
