Title
Multi-Document Summarization with Centroid-Based Pretraining
Authors
Abstract
In Multi-Document Summarization (MDS), the input can be modeled as a set of documents, and the output is their summary. In this paper, we focus on pretraining objectives for MDS. Specifically, we introduce a novel pretraining objective, which involves selecting the ROUGE-based centroid of each document cluster as a proxy for its summary. Our objective thus does not require human-written summaries and can be used for pretraining on a dataset consisting solely of document sets. Through zero-shot, few-shot, and fully supervised experiments on multiple MDS datasets, we show that our model, Centrum, is better than or comparable to a state-of-the-art model. We make the pretrained and fine-tuned models freely available to the research community at https://github.com/ratishsp/centrum.
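The centroid selection described in the abstract can be sketched as follows: for each cluster, pick the document with the highest mean ROUGE score against the other documents in the cluster. This is a minimal illustration, not the authors' implementation; it uses a simplified unigram-overlap F1 in place of a full ROUGE package, and the function names (`rouge1_f1`, `select_centroid`) are our own.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Simplified ROUGE-1 F1: unigram-overlap F-score between two texts."""
    c, r = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((c & r).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(c.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

def select_centroid(docs: list[str]) -> str:
    """Return the document whose mean ROUGE against the rest is highest.

    This document serves as the proxy summary of the cluster during
    pretraining, so no human-written summary is needed.
    """
    best_doc, best_score = docs[0], -1.0
    for i, doc in enumerate(docs):
        others = [d for j, d in enumerate(docs) if j != i]
        score = sum(rouge1_f1(doc, o) for o in others) / len(others)
        if score > best_score:
            best_doc, best_score = doc, score
    return best_doc

cluster = [
    "the cat sat on the mat",
    "the cat sat on a mat",
    "dogs run fast in the park",
]
print(select_centroid(cluster))  # → "the cat sat on the mat"
```

In the actual pretraining setup, each selected centroid becomes the target output while the remaining documents in the cluster serve as the model's input.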