Paper Title
Explainable Rumor Detection using Inter and Intra-feature Attention Networks
Paper Authors
Paper Abstract
With social media becoming ubiquitous, information consumption from this media has also increased. However, one of the serious problems that has emerged with this increase is the propagation of rumors. Rumor identification is therefore a critical task, with significant implications for the economy, democracy, and public health and safety. In this paper, we tackle the problem of automated rumor detection in social media by designing a modular, explainable architecture that uses both latent and handcrafted features and can be expanded to as many new classes of features as desired. This approach allows the end user not only to determine whether a piece of information on social media is genuine or a rumor, but also to see why the algorithm arrived at its conclusion. Using attention mechanisms, we are able to interpret the relative importance of each of these features as well as the relative importance of the feature classes themselves. The advantage of this approach is that the architecture can be expanded with additional handcrafted features as they become available, and it supports extensive testing to determine the relative influence of these features on the final decision. Extensive experimentation on popular datasets and benchmarking against eleven contemporary algorithms show that our approach performs significantly better in terms of F-score and accuracy while also being interpretable.
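The abstract describes intra-feature attention (weighting individual features within a feature class) and inter-feature attention (weighting the feature classes themselves) as the source of interpretability. The paper's actual model is not shown here; the following is only a minimal illustrative sketch of that two-level attention idea, with all module names (FeatureClassAttention, InterFeatureAttention), dimensions, and shapes being assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn


class FeatureClassAttention(nn.Module):
    """Intra-feature attention sketch: scores individual features inside one
    feature class (e.g. handcrafted features) and returns a pooled
    representation plus the attention weights used for interpretation."""

    def __init__(self, feature_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, x: torch.Tensor):
        # x: (batch, num_features, feature_dim)
        weights = torch.softmax(self.score(x).squeeze(-1), dim=-1)   # (batch, num_features)
        pooled = torch.bmm(weights.unsqueeze(1), x).squeeze(1)       # (batch, feature_dim)
        return pooled, weights


class InterFeatureAttention(nn.Module):
    """Inter-feature attention sketch: weights the pooled representations of
    the different feature classes (latent text features, handcrafted
    features, ...) before a rumor / non-rumor classifier."""

    def __init__(self, feature_dim: int, num_labels: int = 2):
        super().__init__()
        self.score = nn.Linear(feature_dim, 1)
        self.classifier = nn.Linear(feature_dim, num_labels)

    def forward(self, class_reprs: torch.Tensor):
        # class_reprs: (batch, num_feature_classes, feature_dim)
        weights = torch.softmax(self.score(class_reprs).squeeze(-1), dim=-1)
        fused = torch.bmm(weights.unsqueeze(1), class_reprs).squeeze(1)
        return self.classifier(fused), weights


if __name__ == "__main__":
    batch, dim = 4, 32
    intra = FeatureClassAttention(feature_dim=dim)
    inter = InterFeatureAttention(feature_dim=dim)

    latent = torch.randn(batch, 10, dim)      # hypothetical latent (learned) features
    handcrafted = torch.randn(batch, 6, dim)  # hypothetical projected handcrafted features

    latent_repr, latent_w = intra(latent)
    hand_repr, hand_w = intra(handcrafted)
    logits, class_w = inter(torch.stack([latent_repr, hand_repr], dim=1))
    # latent_w / hand_w explain features within a class; class_w explains the classes.
    print(logits.shape, latent_w.shape, class_w.shape)
```

Under these assumptions, the intra-class weights indicate which individual features drove a prediction, while the inter-class weights indicate whether latent or handcrafted evidence dominated, and a new feature class can be added by attaching another pooled representation to the stack.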