Paper Title

Learning to segment with limited annotations: Self-supervised pretraining with regression and contrastive loss in MRI

Paper Authors

Lavanya Umapathy, Zhiyang Fu, Rohit Philip, Diego Martin, Maria Altbach, Ali Bilgin

Paper Abstract

Obtaining manual annotations for large datasets for supervised training of deep learning (DL) models is challenging. The availability of large unlabeled datasets compared to labeled ones motivates the use of self-supervised pretraining to initialize DL models for subsequent segmentation tasks. In this work, we consider two pretraining approaches for driving a DL model to learn different representations using: a) a regression loss that exploits spatial dependencies within an image and b) a contrastive loss that exploits semantic similarity between pairs of images. The effect of the pretraining techniques is evaluated in two downstream segmentation applications using Magnetic Resonance (MR) images: a) liver segmentation in abdominal T2-weighted MR images and b) prostate segmentation in T2-weighted MR images of the prostate. We observed that DL models pretrained using self-supervision can be fine-tuned to comparable performance with fewer labeled datasets. Additionally, we observed that initializing the DL model with contrastive-loss-based pretraining performed better than regression-loss-based pretraining.
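For readers who want a concrete picture of the two pretraining objectives contrasted in the abstract, below is a minimal PyTorch sketch: a pixel-wise regression (reconstruction) loss and an NT-Xent-style contrastive loss over paired image views. The function names, the choice of MSE for the regression objective, the projection-embedding shapes, and the temperature value are illustrative assumptions, not the authors' actual implementation.

```python
# Illustrative sketch only; not the paper's implementation.
import torch
import torch.nn.functional as F


def regression_pretraining_loss(pred_patch: torch.Tensor,
                                target_patch: torch.Tensor) -> torch.Tensor:
    """Regression-style objective (assumed MSE): predict image content, e.g.
    a masked or neighboring patch, and penalize pixel-wise error."""
    return F.mse_loss(pred_patch, target_patch)


def contrastive_pretraining_loss(z_i: torch.Tensor,
                                 z_j: torch.Tensor,
                                 temperature: float = 0.1) -> torch.Tensor:
    """NT-Xent-style contrastive objective: embeddings of two views of the same
    image (z_i[k], z_j[k]) are pulled together, all other pairs pushed apart."""
    z_i = F.normalize(z_i, dim=1)           # (N, D) unit-norm embeddings, view 1
    z_j = F.normalize(z_j, dim=1)           # (N, D) unit-norm embeddings, view 2
    z = torch.cat([z_i, z_j], dim=0)        # (2N, D)
    sim = z @ z.t() / temperature           # (2N, 2N) scaled cosine similarities
    n = z_i.shape[0]
    # Exclude self-similarity so each row's softmax ignores its own embedding.
    sim.fill_diagonal_(float("-inf"))
    # The positive for row k is the other view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    # Toy usage: random embeddings stand in for encoder/projection-head outputs.
    z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
    print(contrastive_pretraining_loss(z1, z2).item())
```

In a pretraining setup of this kind, the encoder trained with either objective would then be fine-tuned on the downstream liver or prostate segmentation task with a smaller labeled dataset.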
