Paper Title


Boosting Naturalness of Language in Task-oriented Dialogues via Adversarial Training

Author

Zhu, Chenguang

Abstract


The natural language generation (NLG) module in a task-oriented dialogue system produces user-facing utterances conveying required information. Thus, it is critical for the generated response to be natural and fluent. We propose to integrate adversarial training to produce more human-like responses. The model uses Straight-Through Gumbel-Softmax estimator for gradient computation. We also propose a two-stage training scheme to boost performance. Empirical results show that the adversarial training can effectively improve the quality of language generation in both automatic and human evaluations. For example, in the RNN-LG Restaurant dataset, our model AdvNLG outperforms the previous state-of-the-art result by 3.6% in BLEU.
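The abstract's gradient trick, the Straight-Through Gumbel-Softmax estimator, lets the generator emit discrete (one-hot) tokens for the discriminator while keeping the sampling step differentiable. The sketch below is an illustrative NumPy version, not the paper's implementation; the function name `gumbel_softmax_st` and all parameters are hypothetical, and the straight-through gradient path is described in comments since NumPy has no autodiff.

```python
import numpy as np

def gumbel_softmax_st(logits, tau=1.0, rng=None):
    """Illustrative Straight-Through Gumbel-Softmax sketch (hypothetical
    helper, not the paper's code). Forward pass returns a hard one-hot
    sample; in an autodiff framework, gradients would flow through the
    soft sample instead: y = y_hard + (y_soft - stop_gradient(y_soft))."""
    rng = rng or np.random.default_rng()
    # Gumbel(0, 1) noise via inverse transform: g = -log(-log(U)),
    # with small epsilons to avoid log(0).
    u = rng.uniform(size=logits.shape)
    gumbel = -np.log(-np.log(u + 1e-20) + 1e-20)
    # Temperature-scaled softmax over the perturbed logits (numerically
    # stabilized by subtracting the max).
    y = (logits + gumbel) / tau
    e = np.exp(y - y.max())
    y_soft = e / e.sum()
    # Hard one-hot sample used in the forward pass.
    y_hard = np.zeros_like(y_soft)
    y_hard[np.argmax(y_soft)] = 1.0
    return y_hard, y_soft

# Toy vocabulary of three tokens: sample one discrete token.
logits = np.array([2.0, 0.5, -1.0])
hard, soft = gumbel_softmax_st(logits, tau=0.5)
```

Lower temperatures `tau` make the soft distribution closer to the hard one-hot sample (lower bias, higher gradient variance), which is why schemes like this are often paired with a temperature schedule during training.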
