Paper Title
Unsupervised Explanation Generation via Correct Instantiations
Paper Authors
Paper Abstract
While large pre-trained language models (PLMs) have shown great skill at solving discriminative tasks, a significant gap remains between them and humans on explanation-related tasks. Among these, explaining why a statement is wrong (e.g., because it contradicts commonsense) is especially challenging. The main difficulty lies in finding the conflict point, where the statement contradicts the real world. This paper proposes Neon, a two-phase, unsupervised explanation generation framework. Neon first generates corrected instantiations of the statement (phase I), then uses them to prompt large PLMs to find the conflict point and complete the explanation (phase II). We conduct extensive experiments on two standard explanation benchmarks, ComVE and e-SNLI. According to both automatic and human evaluations, Neon outperforms the baselines, even those with human-annotated instantiations. Beyond explaining negative predictions, we further demonstrate that Neon remains effective when generalized to different scenarios.
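To make the two-phase procedure concrete, here is a minimal Python sketch of the pipeline the abstract describes. It is an illustration under stated assumptions, not the paper's implementation: the prompt templates are invented here, and a small Hugging Face model stands in for the large PLM that Neon actually prompts.

```python
# Minimal sketch of the two-phase Neon pipeline described in the abstract.
# Assumptions (not from the paper): the prompt templates below are invented
# for illustration, and gpt2 is only a stand-in for a much larger PLM.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # stand-in PLM

def generate(prompt: str, n: int = 1) -> list[str]:
    """Sample `n` continuations of `prompt` and strip the prompt prefix."""
    outputs = generator(prompt, num_return_sequences=n,
                        max_new_tokens=40, do_sample=True)
    return [o["generated_text"][len(prompt):].strip() for o in outputs]

def phase_one(statement: str, k: int = 3) -> list[str]:
    """Phase I: generate corrected instantiations of the false statement."""
    prompt = (f"Statement: {statement}\n"
              "Rewrite the statement so that it agrees with commonsense:\n")
    return generate(prompt, n=k)

def phase_two(statement: str, instantiations: list[str]) -> str:
    """Phase II: prompt the PLM with the statement plus its corrected
    instantiations so it can locate the conflict point and explain."""
    corrected = "\n".join(f"- {s}" for s in instantiations)
    prompt = (f"Statement: {statement}\n"
              f"Corrected versions:\n{corrected}\n"
              "Explain why the original statement is wrong:\n")
    return generate(prompt, n=1)[0]

# Usage on a ComVE-style against-commonsense statement:
statement = "He put an elephant into the fridge."
print(phase_two(statement, phase_one(statement)))
```

The design point the sketch captures is that phase II never sees the gold explanation: the corrected instantiations from phase I are the only extra signal the PLM gets for locating the conflict point, which is what makes the framework unsupervised.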