Paper Title

Target and Task specific Source-Free Domain Adaptive Image Segmentation

Paper Authors

Vibashan VS, Jeya Maria Jose Valanarasu, Vishal M. Patel

Paper Abstract


Solving the domain shift problem during inference is essential in medical imaging, as most deep learning-based solutions suffer from it. In practice, domain shifts are tackled by performing Unsupervised Domain Adaptation (UDA), where a model is adapted to an unlabelled target domain by leveraging the labelled source data. In medical scenarios, the data comes with huge privacy concerns, making it difficult to apply standard UDA techniques. Hence, a closer clinical setting is Source-Free UDA (SFUDA), where we have access to the source-trained model but not the source data during adaptation. Existing SFUDA methods rely on pseudo-label-based self-training techniques to address the domain shift. However, these pseudo-labels often have high entropy due to domain shift, and adapting the source model with noisy pseudo-labels leads to sub-optimal performance. To overcome this limitation, we propose a systematic two-stage approach for SFUDA comprising target-specific adaptation followed by task-specific adaptation. In target-specific adaptation, we enhance the pseudo-label generation by minimizing high-entropy regions using the proposed ensemble entropy minimization loss and a selective voting strategy. In task-specific adaptation, we exploit the enhanced pseudo-labels using a student-teacher framework to effectively learn segmentation on the target domain. We evaluate our proposed method on 2D fundus datasets and 3D MRI volumes across 7 different domain shifts, where we perform better than existing UDA and SFUDA methods for medical image segmentation. Code is available at https://github.com/Vibashan/tt-sfuda.
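
The two ingredients named in the abstract, entropy minimization on the target predictions and a student-teacher scheme driven by confidence-filtered pseudo-labels, can be sketched in a few lines of generic PyTorch. The code below is only an illustration under stated assumptions: the confidence threshold `tau`, the EMA momentum, and the single-model entropy term are stand-ins (the paper itself uses an ensemble entropy loss and a selective voting strategy), not the authors' implementation; see the released code for the actual method.

```python
# Minimal sketch of entropy minimization + student-teacher pseudo-labelling
# for source-free adaptation of a segmentation model. Illustrative only:
# `tau`, the EMA momentum, and the loss weighting are assumed values.
import torch
import torch.nn.functional as F


def entropy_loss(logits):
    """Mean per-pixel entropy of the softmax prediction (lower = more confident)."""
    probs = F.softmax(logits, dim=1)                      # (B, C, H, W)
    ent = -(probs * torch.log(probs + 1e-8)).sum(dim=1)   # (B, H, W)
    return ent.mean()


def confident_pseudo_labels(teacher_logits, tau=0.75):
    """Pseudo-labels from the teacher, keeping only pixels whose confidence
    exceeds a threshold (a simple stand-in for the paper's selective voting)."""
    probs = F.softmax(teacher_logits, dim=1)
    conf, labels = probs.max(dim=1)                       # both (B, H, W)
    mask = conf > tau                                     # ignore low-confidence pixels
    return labels, mask


@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    """Teacher weights track the student as an exponential moving average."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1.0 - momentum)


def adaptation_step(student, teacher, images, optimizer):
    """One unlabelled target-domain step: entropy term + masked pseudo-label term."""
    with torch.no_grad():
        labels, mask = confident_pseudo_labels(teacher(images))
    logits = student(images)
    loss = entropy_loss(logits)
    if mask.any():
        ce = F.cross_entropy(logits, labels, reduction="none")  # (B, H, W)
        loss = loss + (ce * mask).sum() / mask.sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()
```

In a setup like this, the teacher would typically start as a frozen copy of the source-trained model (e.g. `copy.deepcopy(student)`) and slowly track the student through the EMA update, which keeps the pseudo-labels stable while the student adapts to the target domain.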
