Paper Title

Improving Large-scale Paraphrase Acquisition and Generation

Paper Authors

Yao Dou, Chao Jiang, Wei Xu

Paper Abstract

This paper addresses the quality issues in existing Twitter-based paraphrase datasets, and discusses the necessity of using two separate definitions of paraphrase for identification and generation tasks. We present a new Multi-Topic Paraphrase in Twitter (MultiPIT) corpus that consists of a total of 130k sentence pairs with crowdsourcing (MultiPIT_crowd) and expert (MultiPIT_expert) annotations using two different paraphrase definitions for paraphrase identification, in addition to a multi-reference test set (MultiPIT_NMR) and a large automatically constructed training set (MultiPIT_Auto) for paraphrase generation. With improved data annotation quality and task-specific paraphrase definitions, the best pre-trained language model fine-tuned on our dataset achieves state-of-the-art performance of 84.2 F1 for automatic paraphrase identification. Furthermore, our empirical results also demonstrate that paraphrase generation models trained on MultiPIT_Auto generate more diverse and higher-quality paraphrases compared to their counterparts fine-tuned on other corpora such as Quora, MSCOCO, and ParaNMT.
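To make the identification task concrete, below is a minimal sketch of paraphrase identification framed as sentence-pair classification with a pre-trained encoder, using the Hugging Face transformers API. The model choice (roberta-base), the label convention (index 1 = paraphrase), and the example pair are illustrative assumptions, not the authors' exact configuration; a model loaded this way would still need fine-tuning on MultiPIT before its predictions mean anything.

```python
# Minimal sketch: paraphrase identification as sentence-pair classification.
# Assumptions: roberta-base as the encoder, label index 1 = "paraphrase",
# and a toy example pair. Not the paper's exact setup.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "roberta-base"  # any pre-trained encoder could stand in here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Encode the two sentences jointly; the encoder sees both with a separator,
# which is the standard formulation for paraphrase identification.
sent_a = "the movie last night was fantastic"
sent_b = "that film was really great"
inputs = tokenizer(sent_a, sent_b, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Softmax over the two classes; index 1 is taken as "paraphrase" by convention.
prob_paraphrase = torch.softmax(logits, dim=-1)[0, 1].item()
print(f"P(paraphrase) = {prob_paraphrase:.3f}")
```

The classification head is randomly initialized when loaded this way, so in practice one would fine-tune it on the MultiPIT_crowd/MultiPIT_expert pairs (e.g., with the transformers Trainer) before evaluating.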
