Paper Title


MetaLR: Meta-tuning of Learning Rates for Transfer Learning in Medical Imaging

Paper Authors

Yixiong Chen, Li Liu, Jingxian Li, Hua Jiang, Chris Ding, Zongwei Zhou

Paper Abstract


In medical image analysis, transfer learning is a powerful method for deep neural networks (DNNs) to generalize well on limited medical data. Prior efforts have focused on developing pre-training algorithms on domains such as lung ultrasound, chest X-ray, and liver CT to bridge domain gaps. However, we find that model fine-tuning also plays a crucial role in adapting medical knowledge to target tasks. The common fine-tuning method is manually picking transferable layers (e.g., the last few layers) to update, which is labor-expensive. In this work, we propose a meta-learning-based LR tuner, named MetaLR, to make different layers automatically co-adapt to downstream tasks based on their transferabilities across domains. MetaLR learns appropriate LRs for different layers in an online manner, preventing highly transferable layers from forgetting their medical representation abilities and driving less transferable layers to adapt actively to new domains. Extensive experiments on various medical applications show that MetaLR outperforms previous state-of-the-art (SOTA) fine-tuning strategies. Codes are released.
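To make the idea concrete, here is a minimal toy sketch of online per-layer learning-rate adaptation driven by a hypergradient of the validation loss — the general mechanism the abstract describes. Everything here (the two-"layer" linear model, the parameter-group names `backbone`/`head`, the meta step size) is illustrative, not the authors' actual MetaLR implementation:

```python
import numpy as np

# Toy regression data with a known ground truth, so fine-tuning has signal.
rng = np.random.default_rng(0)
w_true = rng.normal(size=4)
X_tr = rng.normal(size=(64, 4)); y_tr = X_tr @ w_true + 0.1 * rng.normal(size=64)
X_val = rng.normal(size=(32, 4)); y_val = X_val @ w_true + 0.1 * rng.normal(size=32)

# Two parameter groups standing in for network layers, each with its own LR.
params = {"backbone": np.zeros(4), "head": np.zeros(1)}
lrs = {"backbone": 0.05, "head": 0.05}  # per-layer LRs, adapted online
meta_lr = 1e-3                          # step size for the LR meta-update

def loss(X, y, p):
    resid = X @ p["backbone"] + p["head"][0] - y
    return float(np.mean(resid ** 2) / 2)

def grads(X, y, p):
    resid = X @ p["backbone"] + p["head"][0] - y
    return {"backbone": X.T @ resid / len(y), "head": np.array([resid.mean()])}

val_loss_before = loss(X_val, y_val, params)

for _ in range(300):
    g_tr = grads(X_tr, y_tr, params)
    # Tentative SGD step with the current per-layer LRs.
    trial = {k: params[k] - lrs[k] * g_tr[k] for k in params}
    # Hypergradient: d L_val / d lr_k = -(g_val_k . g_tr_k), so descending it
    # raises the LR of layers whose training update helps validation and
    # lowers it where the update hurts (e.g. highly transferable layers).
    g_val = grads(X_val, y_val, trial)
    for k in lrs:
        lrs[k] = max(0.0, lrs[k] + meta_lr * float(g_val[k] @ g_tr[k]))
    params = trial

val_loss_after = loss(X_val, y_val, params)
```

The design point this mirrors is that no layer's LR is hand-picked or frozen: each group's rate is re-estimated every step from held-out data, which is what lets transferable layers keep small LRs (preserving pre-trained representations) while less transferable layers get larger ones.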
