Paper Title

Modeling Global and Local Node Contexts for Text Generation from Knowledge Graphs

Paper Authors

Ribeiro, Leonardo F. R., Zhang, Yue, Gardent, Claire, Gurevych, Iryna

Paper Abstract


Recent graph-to-text models generate text from graph-based data using either global or local aggregation to learn node representations. Global node encoding allows explicit communication between two distant nodes, but neglects graph topology, as all nodes are directly connected. In contrast, local node encoding considers the relations between neighbor nodes, capturing the graph structure, but it can fail to capture long-range relations. In this work, we gather both encoding strategies, proposing novel neural models that encode an input graph combining both global and local node contexts, in order to learn better contextualized node embeddings. In our experiments, we demonstrate that our approaches lead to significant improvements on two graph-to-text datasets, achieving BLEU scores of 18.01 on the AGENDA dataset and 63.69 on the WebNLG dataset for seen categories, outperforming state-of-the-art models by 3.7 and 3.1 points, respectively.
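The contrast the abstract draws can be illustrated with self-attention: global aggregation lets every node attend to every other node (a fully connected attention pattern), while local aggregation masks attention to graph neighbors only. The snippet below is a minimal numpy sketch of that idea, not the authors' architecture; the single-head attention, the toy adjacency matrix, and the concatenation used to combine the two contexts are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(H, mask=None):
    """Single-head scaled dot-product self-attention over node states H (n, d).

    `mask` is an (n, n) 0/1 matrix marking allowed node pairs; `None` means
    fully connected, i.e. the "global" encoding the abstract describes.
    """
    d = H.shape[1]
    scores = H @ H.T / np.sqrt(d)
    if mask is not None:
        # "Local" encoding: restrict attention to graph neighbors only.
        scores = np.where(mask > 0, scores, -1e9)
    return softmax(scores, axis=-1) @ H

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))           # 4 nodes with 8-dim states (toy example)
A = np.array([[1, 1, 0, 0],           # adjacency with self-loops: a 4-node chain
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]])

global_ctx = attention(H)             # every node sees every node (topology ignored)
local_ctx  = attention(H, mask=A)     # attention restricted to neighbors
combined   = np.concatenate([global_ctx, local_ctx], axis=-1)   # (4, 16)
```

One simple way to combine the two contexts is concatenation, as above; the paper's actual combination strategy may differ.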
