Paper Title

Multiple Instance Neuroimage Transformer

Paper Authors

Ayush Singla, Qingyu Zhao, Daniel K. Do, Yuyin Zhou, Kilian M. Pohl, Ehsan Adeli

Paper Abstract

For the first time, we propose using a multiple instance learning based convolution-free transformer model, called Multiple Instance Neuroimage Transformer (MINiT), for the classification of T1-weighted (T1w) MRIs. We first present several variants of transformer models adopted for neuroimages. These models extract non-overlapping 3D blocks from the input volume and perform multi-headed self-attention on a sequence of their linear projections. MINiT, on the other hand, treats each of the non-overlapping 3D blocks of the input MRI as its own instance, splitting it further into non-overlapping 3D patches, on which multi-headed self-attention is computed. As a proof-of-concept, we evaluate the efficacy of our model by training it to identify sex from T1w-MRIs of two public datasets: Adolescent Brain Cognitive Development (ABCD) and the National Consortium on Alcohol and Neurodevelopment in Adolescence (NCANDA). The learned attention maps highlight voxels contributing to identifying sex differences in brain morphometry. The code is available at https://github.com/singlaayush/MINIT.
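
To make the block/patch hierarchy described in the abstract concrete, the following is a minimal PyTorch sketch of that idea, not the authors' implementation (the linked repository contains the real one). The names `split_3d` and `MINiTSketch`, the block/patch sizes, the per-block [CLS] readout, and the mean pooling over blocks are illustrative assumptions; positional embeddings and other details from the paper are omitted.

```python
# Minimal sketch, assuming a MINiT-style two-level split: the MRI volume is cut
# into non-overlapping 3D blocks (instances), each block into non-overlapping
# 3D patches, and self-attention runs over the patch tokens of each block.
import torch
import torch.nn as nn


def split_3d(x, size):
    """Split a volume (B, C, D, H, W) into non-overlapping cubes of edge `size`.

    Returns (B, N, C, size, size, size), where N is the number of cubes.
    """
    b, c, _, _, _ = x.shape
    x = (x.unfold(2, size, size)    # cut along depth
          .unfold(3, size, size)    # cut along height
          .unfold(4, size, size))   # cut along width
    x = x.permute(0, 2, 3, 4, 1, 5, 6, 7).contiguous()
    return x.view(b, -1, c, size, size, size)


class MINiTSketch(nn.Module):
    """Each 3D block of the MRI is treated as an instance; self-attention runs
    over the patches inside a block, and a simple mean over blocks aggregates
    the instances (the "bag") for classification."""

    def __init__(self, in_ch=1, block=32, patch=8, dim=128, heads=4,
                 depth=2, num_classes=2):
        super().__init__()
        self.block, self.patch = block, patch
        self.to_token = nn.Linear(in_ch * patch ** 3, dim)    # linear projection of patches
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))       # per-instance [CLS] token
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                           # x: (B, C, D, H, W)
        blocks = split_3d(x, self.block)            # (B, N, C, b, b, b)
        b, n = blocks.shape[:2]
        blocks = blocks.flatten(0, 1)               # every block becomes an instance
        patches = split_3d(blocks, self.patch)      # (B*N, M, C, p, p, p)
        tokens = self.to_token(patches.flatten(2))  # (B*N, M, dim)
        cls = self.cls.expand(tokens.size(0), -1, -1)
        encoded = self.encoder(torch.cat([cls, tokens], 1))[:, 0]
        bag = encoded.view(b, n, -1).mean(dim=1)    # aggregate instances per MRI
        return self.head(bag)


if __name__ == "__main__":
    model = MINiTSketch()
    mri = torch.randn(2, 1, 64, 64, 64)   # toy volumes; real T1w MRIs are larger
    print(model(mri).shape)               # torch.Size([2, 2])
```

With 64³ toy volumes and the assumed sizes, each volume yields 8 blocks of 32³ voxels, and each block yields 64 patches of 8³ voxels, so attention is computed over 65 tokens per instance before the per-MRI aggregation.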
