Paper Title


CLAM: Selective Clarification for Ambiguous Questions with Generative Language Models

Authors

Lorenz Kuhn, Yarin Gal, Sebastian Farquhar

Abstract


Users often ask dialogue systems ambiguous questions that require clarification. We show that current language models rarely ask users to clarify ambiguous questions and instead provide incorrect answers. To address this, we introduce CLAM: a framework for getting language models to selectively ask for clarification about ambiguous user questions. In particular, we show that we can prompt language models to detect whether a given question is ambiguous, generate an appropriate clarifying question to ask the user, and give a final answer after receiving clarification. We also show that we can simulate users by providing language models with privileged information. This lets us automatically evaluate multi-turn clarification dialogues. Finally, CLAM significantly improves language models' accuracy on mixed ambiguous and unambiguous questions relative to SotA.
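The abstract describes a three-step loop: detect ambiguity, ask the user a clarifying question, then answer. Below is a minimal sketch of that loop, not the authors' actual implementation: `lm` stands in for any prompted language model, and `user` for the simulated user (which the paper implements as a language model given privileged information). All function names and prompt wordings here are illustrative assumptions.

```python
from typing import Callable

def clam_answer(question: str,
                lm: Callable[[str], str],
                user: Callable[[str], str]) -> str:
    """Selectively ask for clarification before answering (sketch).

    `lm` maps a prompt to a model completion; `user` maps a clarifying
    question to the (possibly simulated) user's reply.
    """
    # Step 1: prompt the model to classify the question as ambiguous or not.
    verdict = lm(f"Is this question ambiguous? Answer yes or no.\nQ: {question}")
    if verdict.strip().lower().startswith("yes"):
        # Step 2: generate a clarifying question and put it to the user.
        clarifying_q = lm(f"Ask one clarifying question about:\n{question}")
        clarification = user(clarifying_q)
        # Step 3: answer the original question given the clarification.
        return lm(f"Q: {question}\nClarification: {clarification}\nAnswer:")
    # Unambiguous questions are answered directly, with no extra turn.
    return lm(f"Q: {question}\nAnswer:")
```

Because `user` is just a callable, the paper's evaluation trick follows naturally: pass in a second language model prompted with the disambiguating (privileged) information, and the whole multi-turn dialogue can be scored automatically.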
