Paper Title
Learning towards Synchronous Network Memorizability and Generalizability for Continual Segmentation across Multiple Sites
Paper Authors
Paper Abstract
In clinical practice, a segmentation network is often required to continually learn from a sequential data stream spanning multiple sites, rather than from a consolidated dataset, due to storage costs and privacy restrictions. However, during the continual learning process, existing methods are usually limited in either network memorizability on previous sites or generalizability to unseen sites. This paper tackles the challenging problem of Synchronous Memorizability and Generalizability (SMG), simultaneously improving performance on both previous and unseen sites via a novel SMG-learning framework. First, we propose a Synchronous Gradient Alignment (SGA) objective, which not only promotes network memorizability by enforcing coordinated optimization on a small exemplar set from previous sites (called the replay buffer), but also enhances generalizability by encouraging site-invariance under simulated domain shift. Second, to simplify the optimization of the SGA objective, we design a Dual-Meta algorithm that approximates the SGA objective as dual meta-objectives, avoiding expensive computational overhead. Third, for efficient rehearsal, we configure the replay buffer by comprehensively considering inter-site diversity to reduce redundancy. Experiments on prostate MRI data sequentially acquired from six institutes demonstrate that our method simultaneously achieves higher memorizability and generalizability than state-of-the-art methods. Code is available at https://github.com/jingyzhang/SMG-Learning.
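The abstract's key mechanism couples the losses on current-site data and the replay buffer through a gradient-alignment term, and the Dual-Meta algorithm avoids the second-order gradients this term would normally require. A standard first-order identity behind such meta-objectives is that evaluating the buffer loss after a small virtual step on the current-site gradient approximates the buffer loss minus a scaled inner product of the two gradients. The toy NumPy sketch below (not the authors' code; the linear-regression model, step size `alpha`, and helper names are illustrative assumptions) checks this approximation numerically:

```python
import numpy as np

# Sketch only: verifies that the meta-objective
#   L_buf(theta - alpha * grad L_cur(theta))
# matches, to first order in alpha,
#   L_buf(theta) - alpha * <grad L_cur, grad L_buf>,
# i.e. the gradient-alignment term used to couple current-site
# data and the replay buffer, without second-order gradients.

rng = np.random.default_rng(0)

def make_site(n=32, d=5):
    # Synthetic "site": linear data with site-specific weights and noise.
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y

def loss(theta, X, y):
    r = X @ theta - y
    return 0.5 * np.mean(r ** 2)

def grad(theta, X, y):
    r = X @ theta - y
    return X.T @ r / len(y)

X_cur, y_cur = make_site()  # batch from the current site
X_buf, y_buf = make_site()  # replay-buffer batch from previous sites

theta = rng.normal(size=5)
alpha = 1e-3  # virtual inner step size

g_cur = grad(theta, X_cur, y_cur)
g_buf = grad(theta, X_buf, y_buf)

# Meta-objective: buffer loss at a virtual step along the current gradient.
meta = loss(theta - alpha * g_cur, X_buf, y_buf)
# First-order expansion: buffer loss minus alpha * gradient inner product.
approx = loss(theta, X_buf, y_buf) - alpha * (g_cur @ g_buf)

gap = abs(meta - approx)  # O(alpha^2), so tiny for small alpha
```

Minimizing the meta-objective therefore both lowers the buffer loss and rewards positive gradient inner products, which is the alignment effect the SGA objective targets.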