Paper Title

MEAformer: Multi-modal Entity Alignment Transformer for Meta Modality Hybrid

Authors

Zhuo Chen, Jiaoyan Chen, Wen Zhang, Lingbing Guo, Yin Fang, Yufeng Huang, Yichi Zhang, Yuxia Geng, Jeff Z. Pan, Wenting Song, Huajun Chen

Abstract

Multi-modal entity alignment (MMEA) aims to discover identical entities across different knowledge graphs (KGs) whose entities are associated with relevant images. However, current MMEA algorithms rely on KG-level modality fusion strategies for multi-modal entity representation, which ignores the variations of modality preferences of different entities, thus compromising robustness against noise in modalities such as blurry images and relations. This paper introduces MEAformer, a multi-modal entity alignment transformer approach for meta modality hybrid, which dynamically predicts the mutual correlation coefficients among modalities for more fine-grained entity-level modality fusion and alignment. Experimental results demonstrate that our model not only achieves SOTA performance in multiple training scenarios, including supervised, unsupervised, iterative, and low-resource settings, but also has a limited number of parameters, efficient runtime, and interpretability. Our code is available at https://github.com/zjukg/MEAformer.
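
The abstract's core idea is entity-level modality fusion: for each entity, correlation coefficients among its modality embeddings are predicted dynamically and used as fusion weights. Below is a minimal sketch of that idea, not the authors' implementation: the module class, the choice of a single attention head, the embedding dimension, and the four example modalities (graph, relation, attribute, visual) are illustrative assumptions; see https://github.com/zjukg/MEAformer for the actual code.

```python
# Sketch: per-entity modality fusion via attention-style correlation coefficients.
# Assumptions (not from the paper): single-head attention, dim=300, 4 modalities.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EntityLevelModalityFusion(nn.Module):
    def __init__(self, dim: int = 300):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, modality_embs: torch.Tensor) -> torch.Tensor:
        # modality_embs: [num_entities, num_modalities, dim]
        q = self.q_proj(modality_embs)
        k = self.k_proj(modality_embs)
        # Pairwise correlation coefficients between modalities, per entity.
        attn = torch.softmax(q @ k.transpose(-1, -2) * self.scale, dim=-1)
        # Per-modality weight: average attention each modality receives.
        weights = attn.mean(dim=-2, keepdim=True)          # [E, 1, M]
        # Weighted mix of modality embeddings -> fused entity representation.
        fused = (weights.transpose(-1, -2) * modality_embs).sum(dim=1)
        return F.normalize(fused, dim=-1)                  # [E, dim]

# Toy usage: 5 entities, 4 modalities (e.g. graph / relation / attribute / visual).
if __name__ == "__main__":
    embs = torch.randn(5, 4, 300)
    fused = EntityLevelModalityFusion()(embs)
    print(fused.shape)  # torch.Size([5, 300])
```

Because the weights are computed per entity rather than globally for the whole KG, an entity with a blurry or missing image can down-weight the visual modality while others keep relying on it, which is the robustness property the abstract highlights.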
