Paper Title
Inferring Implicit Relations in Complex Questions with Language Models
Paper Authors
Abstract
A prominent challenge for modern language understanding systems is the ability to answer implicit reasoning questions, where the reasoning steps required for answering the question are not mentioned explicitly in the text. In this work, we investigate why current models struggle with implicit reasoning question answering (QA) tasks, by decoupling the inference of reasoning steps from their execution. We define a new task of implicit relation inference and construct a benchmark, IMPLICITRELATIONS, where, given a question, a model should output a list of concept-relation pairs, in which the relations describe the implicit reasoning steps required for answering the question. Using IMPLICITRELATIONS, we evaluate models from the GPT-3 family and find that, while these models struggle on the implicit reasoning QA task, they often succeed at inferring implicit relations. This suggests that the challenge in implicit reasoning questions does not stem solely from the need to plan a reasoning strategy, but from having to do so while also retrieving and reasoning over relevant information.
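To make the task format concrete, here is a minimal sketch of an input-output pair for implicit relation inference as described in the abstract. The question and the concept-relation pairs below are illustrative assumptions, not items taken from the actual IMPLICITRELATIONS benchmark.

```python
# Hypothetical illustration of the implicit-relation-inference task:
# input is a question; output is a list of (concept, relation) pairs
# naming the implicit reasoning steps needed to answer it.

def format_example(question, concept_relation_pairs):
    """Render a question together with its implicit concept-relation pairs."""
    pairs = "; ".join(
        f"({concept}, {relation})" for concept, relation in concept_relation_pairs
    )
    return f"Q: {question}\nImplicit relations: {pairs}"

# Assumed example: answering this question implicitly requires comparing
# when Aristotle lived with when the laptop was invented.
example = format_example(
    "Did Aristotle use a laptop?",
    [("Aristotle", "period of life"), ("laptop", "date of invention")],
)
print(example)
```

Under this framing, a model is judged on whether it can name the implicit reasoning steps (the relation pairs), independently of whether it can execute them to produce the final answer.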