Paper Title
CASHformer: Cognition Aware SHape Transformer for Longitudinal Analysis
Paper Authors
Paper Abstract
Modeling temporal changes in subcortical structures is crucial for a better understanding of the progression of Alzheimer's disease (AD). Given their flexibility in adapting to heterogeneous sequence lengths, mesh-based transformer architectures have been proposed in the past for predicting hippocampus deformations across time. However, one of the main limitations of transformers is the large number of trainable parameters, which makes applying them to small datasets very challenging. In addition, current methods do not include relevant non-image information that can help to identify AD-related patterns in the progression. To this end, we introduce CASHformer, a transformer-based framework to model longitudinal shape trajectories in AD. CASHformer incorporates the idea of pre-trained transformers as universal compute engines that generalize across a wide range of tasks by freezing most layers during fine-tuning. This reduces the number of trainable parameters by over 90% with respect to the original model and therefore enables the application of large models to small datasets without overfitting. In addition, CASHformer models cognitive decline to reveal AD atrophy patterns in the temporal sequence. Our results show that CASHformer reduces the reconstruction error by 73% compared to previously proposed methods. Moreover, the accuracy of detecting patients progressing to AD increases by 3% when imputing missing longitudinal shape data.
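The frozen pre-trained transformer idea summarized in the abstract can be illustrated with a short PyTorch sketch. The function name, the keyword list, and the generic encoder standing in for the pre-trained backbone are assumptions made for illustration; CASHformer's actual fine-tuning recipe may differ in which layers remain trainable.

    import torch.nn as nn

    # Minimal sketch of the "frozen pre-trained transformer" idea:
    # freeze the bulk of a pre-trained transformer and fine-tune only
    # a small subset of parameters (here: layer norms plus hypothetical
    # task-specific embedding/output modules). Keyword names are assumptions.
    def freeze_for_finetuning(model: nn.Module,
                              trainable_keywords=("norm", "embed", "head")):
        """Freeze every parameter whose name contains none of the
        trainable keywords; return (total, trainable) parameter counts."""
        total, trainable = 0, 0
        for name, param in model.named_parameters():
            param.requires_grad = any(k in name for k in trainable_keywords)
            total += param.numel()
            if param.requires_grad:
                trainable += param.numel()
        return total, trainable

    # Generic PyTorch transformer encoder standing in for the backbone.
    layer = nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True)
    backbone = nn.TransformerEncoder(layer, num_layers=12)

    total, trainable = freeze_for_finetuning(backbone)
    # Only the layer-norm parameters stay trainable here, a fraction well
    # under 10% of the full model, consistent with the >90% reduction
    # reported in the abstract.
    print(f"trainable fraction: {trainable / total:.1%}")

In this toy setup only the layer norms match a keyword, so nearly all weights are excluded from the optimizer; in practice one would pass only the parameters with requires_grad=True to the optimizer during fine-tuning.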