Paper Title
Multi-channel Attentive Graph Convolutional Network With Sentiment Fusion For Multimodal Sentiment Analysis
Paper Authors
Paper Abstract
With the explosive growth of multimodal reviews on social media platforms, multimodal sentiment analysis has gained popularity because of its high relevance to these social media posts. Although most previous studies design various fusion frameworks for learning an interactive representation of multiple modalities, they fail to incorporate sentiment knowledge into inter-modality learning. This paper proposes a Multi-channel Attentive Graph Convolutional Network (MAGCN), consisting of two main components: cross-modality interactive learning and sentiment feature fusion. For cross-modality interactive learning, we exploit the self-attention mechanism combined with densely connected graph convolutional networks to learn inter-modality dynamics. For sentiment feature fusion, we utilize multi-head self-attention to merge sentiment knowledge into inter-modality feature representations. Extensive experiments are conducted on three widely used datasets. The experimental results demonstrate that the proposed model achieves competitive performance on accuracy and F1 score compared to several state-of-the-art approaches.
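The abstract names scaled dot-product self-attention as the core mechanism for both components (cross-modality interaction and sentiment fusion). The following is a minimal pure-Python sketch of that generic mechanism only; it is an illustration of self-attention itself, not the authors' MAGCN architecture, and the function names are hypothetical.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(X):
    """Scaled dot-product self-attention over a sequence of feature vectors.

    X: list of n vectors of dimension d. For simplicity the queries, keys,
    and values are all X itself (no learned projection matrices, which a
    real model such as MAGCN would include).
    """
    n, d = len(X), len(X[0])
    scale = math.sqrt(d)  # scale scores by sqrt(d), as in standard attention
    out = []
    for i in range(n):
        # attention scores of position i against every position j
        scores = [sum(X[i][k] * X[j][k] for k in range(d)) / scale
                  for j in range(n)]
        w = softmax(scores)
        # output i is the attention-weighted sum of the value vectors
        out.append([sum(w[j] * X[j][k] for j in range(n)) for k in range(d)])
    return out
```

In the paper's framing, one such attention pass would operate on features from different modalities (so queries and keys come from different channels), and multi-head self-attention for sentiment fusion would run several independent copies of this computation with separate projections and concatenate the results.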