Paper Title


Neural Methods for Logical Reasoning Over Knowledge Graphs

Paper Authors

Alfonso Amayuelas, Shuai Zhang, Susie Xi Rao, Ce Zhang

Paper Abstract


Reasoning is a fundamental problem for computers and is deeply studied in Artificial Intelligence. In this paper, we focus specifically on answering multi-hop logical queries on Knowledge Graphs (KGs). This is a complicated task because, in real-world scenarios, the graphs tend to be large and incomplete. Most previous works have been unable to create models that accept full First-Order Logic (FOL) queries, which include negation, and have only been able to process a limited set of query structures. Additionally, most methods present logic operators that can only perform the logical operation they were designed for. We introduce a set of models that use Neural Networks to create one-point vector embeddings to answer the queries. The versatility of neural networks allows the framework to handle FOL queries with Conjunction ($\wedge$), Disjunction ($\vee$), and Negation ($\neg$) operators. We demonstrate the performance of our models through extensive experiments on well-known benchmark datasets. Besides having more versatile operators, the models achieve a 10\% relative increase over the best-performing state of the art and more than 30\% over the original method based on single-point vector embeddings.
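The abstract's core idea — implementing each logical operator as a learned neural network over single-point query embeddings, so that arbitrary FOL query structures can be composed — can be illustrated with a minimal sketch. This is an assumed toy architecture, not the authors' exact model: the operator networks here are random single-layer maps standing in for trained ones, and the embedding dimension, entity set, and the De Morgan construction of disjunction are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # embedding dimension (illustrative)

def make_layer(in_dim, out_dim):
    """Return (W, b) for a single linear layer; a stand-in for a trained network."""
    return rng.standard_normal((in_dim, out_dim)) * 0.1, np.zeros(out_dim)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical learned operators: each logical operation gets its own network,
# so operators can be chained to answer multi-hop FOL queries.
W_and, b_and = make_layer(2 * DIM, DIM)  # conjunction: merges two sub-query embeddings
W_neg, b_neg = make_layer(DIM, DIM)      # negation: maps an embedding to its complement

def conjunction(q1, q2):
    """q1 AND q2: a network applied to the concatenated sub-query embeddings."""
    return relu(np.concatenate([q1, q2]) @ W_and + b_and)

def negation(q):
    """NOT q: a network applied to a single query embedding."""
    return relu(q @ W_neg + b_neg)

def disjunction(q1, q2):
    """q1 OR q2 via De Morgan: ¬(¬q1 ∧ ¬q2), reusing the other operators."""
    return negation(conjunction(negation(q1), negation(q2)))

# Answering: embed the query as one point, then rank entities by distance.
entities = rng.standard_normal((100, DIM))  # toy entity embedding table
query = disjunction(rng.standard_normal(DIM), rng.standard_normal(DIM))
answer = int(np.argmin(np.linalg.norm(entities - query, axis=1)))
```

Because every operator is itself a neural network mapping embeddings to embeddings, the same three building blocks compose into any conjunction/disjunction/negation query shape, which is the versatility the abstract claims over operator-specific geometric constructions.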
