Paper Title
"Wait, I'm Still Talking!" Predicting the Dialogue Interaction Behavior Using Imagine-Then-Arbitrate Model
Paper Authors
Paper Abstract
Producing natural and accurate responses, as humans do, is the ultimate goal of intelligent dialogue agents. So far, most prior work has concentrated on selecting or generating one pertinent and fluent response according to the current query and its context. These models operate in a one-to-one setting, producing one response to one utterance each round. However, in real human-human conversations, people often send several short messages sequentially for readability instead of a single long message in one turn. Messages therefore do not end with an explicit ending signal, which is crucial for an agent to decide when to reply. So the first step for an intelligent dialogue agent is not replying, but deciding whether it should reply at the moment. To address this issue, in this paper we propose a novel Imagine-then-Arbitrate (ITA) neural dialogue model to help the agent decide whether to wait or to respond directly. Our method has two imaginator modules and one arbitrator module. The two imaginators learn the agent's and the user's speaking styles respectively and generate possible utterances, which, combined with the dialogue history, serve as the input to the arbitrator. The arbitrator then decides whether to wait or to respond to the user directly. To verify the performance and effectiveness of our method, we prepared two dialogue datasets and compared our approach with several popular models. Experimental results show that our model performs well on the ending-prediction task and outperforms the baseline models.
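The decision flow the abstract describes can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names (`imagine`, `arbitrate`), the canned imaginator outputs, and the punctuation-based waiting rule are all assumptions standing in for the paper's trained neural imaginators and arbitrator classifier.

```python
def imagine(history, speaker):
    """Stand-in imaginator for one side of the dialogue.

    In the paper, each imaginator is a language model trained on the
    agent's or the user's utterances; here we return a canned
    continuation purely to illustrate the data flow.
    """
    canned = {
        "user": "and one more thing",      # user might keep talking
        "agent": "Sure, I can help you.",  # agent's possible reply
    }
    return canned[speaker]


def arbitrate(history):
    """Stand-in arbitrator: decide whether to WAIT or RESPOND.

    The real arbitrator is a classifier over the dialogue history plus
    the two imagined utterances. This toy rule waits when the user's
    last message looks unfinished (no terminal punctuation).
    """
    # Imagined utterances from both imaginators join the history,
    # mirroring the arbitrator's input described in the abstract.
    features = history + [imagine(history, "user"), imagine(history, "agent")]
    last_user_msg = history[-1]
    if last_user_msg.rstrip().endswith((".", "?", "!")):
        return "RESPOND"
    return "WAIT"
```

Usage: `arbitrate(["Book me a flight to"])` returns `"WAIT"` because the message looks unfinished, while `arbitrate(["Book me a flight to Paris."])` returns `"RESPOND"`.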