Paper Title

Towards Unified Prompt Tuning for Few-shot Text Classification

Authors

Jianing Wang, Chengyu Wang, Fuli Luo, Chuanqi Tan, Minghui Qiu, Fei Yang, Qiuhui Shi, Songfang Huang, Ming Gao

Abstract

Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-shot text classification by employing task-specific prompts. Yet, PLMs are unfamiliar with prompt-style expressions during pre-training, which limits the few-shot learning performance on downstream tasks. It would be desirable if models could acquire some prompting knowledge before adaptation to specific NLP tasks. We present the Unified Prompt Tuning (UPT) framework, leading to better few-shot text classification for BERT-style models by explicitly capturing prompting semantics from non-target NLP datasets. In UPT, a novel paradigm Prompt-Options-Verbalizer is proposed for joint prompt learning across different NLP tasks, forcing PLMs to capture task-invariant prompting knowledge. We further design a self-supervised task named Knowledge-enhanced Selective Masked Language Modeling to improve the PLM's generalization abilities for accurate adaptation to previously unseen tasks. After multi-task learning across multiple tasks, the PLM can be better prompt-tuned towards any dissimilar target task in low-resource settings. Experiments over a variety of NLP tasks show that UPT consistently outperforms state-of-the-art methods for prompt-based fine-tuning.
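
To make the Prompt-Options-Verbalizer paradigm described in the abstract more concrete, below is a minimal sketch of how such an input might be assembled for a sentiment example. The function name, prompt template, option wording, and verbalizer mapping are illustrative assumptions; the abstract does not specify the actual templates used in UPT.

```python
# A minimal, illustrative sketch of a Prompt-Options-Verbalizer (POV) style
# input for few-shot text classification. The template text, options, and
# verbalizer mapping are hypothetical placeholders, not the paper's actual ones.
from typing import Dict, List


def build_pov_input(text: str, prompt: str, options: List[str], mask_token: str = "[MASK]") -> str:
    """Concatenate the input text, a task prompt, the candidate options,
    and a mask slot that the PLM fills in during prompt-based fine-tuning."""
    option_str = " or ".join(options)
    return f"{text} {prompt} {option_str}? {mask_token}."


# Hypothetical verbalizer: maps label words (predicted at the mask position)
# back to task labels.
verbalizer: Dict[str, str] = {"great": "positive", "terrible": "negative"}

example = build_pov_input(
    text="The movie was a delight from start to finish.",
    prompt="Overall, it was",          # hypothetical prompt fragment
    options=list(verbalizer.keys()),   # candidate label words surfaced as options
)
print(example)
# -> The movie was a delight from start to finish. Overall, it was great or terrible? [MASK].
```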
