Paper Title
ConVEx: Data-Efficient and Few-Shot Slot Labeling
Paper Authors
Paper Abstract
We propose ConVEx (Conversational Value Extractor), an efficient pretraining and fine-tuning neural approach for slot-labeling dialog tasks. Instead of relying on more general pretraining objectives from prior work (e.g., language modeling, response selection), ConVEx's pretraining objective, a novel pairwise cloze task using Reddit data, is well aligned with its intended usage on sequence labeling tasks. This enables learning domain-specific slot labelers by simply fine-tuning decoding layers of the pretrained general-purpose sequence labeling model, while the majority of the pretrained model's parameters are kept frozen. We report state-of-the-art performance of ConVEx across a range of diverse domains and data sets for dialog slot-labeling, with the largest gains in the most challenging, few-shot setups. We believe that ConVEx's reduced pretraining times (i.e., only 18 hours on 12 GPUs) and cost, along with its efficient fine-tuning and strong performance, promise wider portability and scalability for data-efficient sequence-labeling tasks in general.
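To make the abstract's two key ideas concrete, below is a minimal, illustrative Python/PyTorch sketch, not the authors' released implementation. It shows (1) building one pairwise cloze example, where a key phrase shared by two sentences (e.g., from the same Reddit thread) is blanked out in a template sentence and BIO-tagged in an input sentence, and (2) fine-tuning only a light decoding head while the pretrained encoder's parameters stay frozen. All names here (`make_pairwise_cloze`, `SlotLabeler`, `BLANK`, the stand-in `TransformerEncoder`) are hypothetical, and the encoder is a generic placeholder for ConVEx's pretrained sequence-labeling model.

```python
import torch
import torch.nn as nn

BLANK = "[BLANK]"  # sentinel replacing the shared key phrase in the template


def make_pairwise_cloze(template_sent: str, input_sent: str, keyphrase: str):
    """Build one pairwise cloze example from two sentences that share
    `keyphrase`: blank the phrase in the template sentence, and BIO-tag
    its span in the input sentence (whitespace tokenization for brevity)."""
    template = template_sent.replace(keyphrase, BLANK, 1)
    tokens, phrase = input_sent.split(), keyphrase.split()
    tags = ["O"] * len(tokens)
    for i in range(len(tokens) - len(phrase) + 1):
        if tokens[i:i + len(phrase)] == phrase:  # tag first occurrence only
            tags[i:i + len(phrase)] = ["B"] + ["I"] * (len(phrase) - 1)
            break
    return template, tokens, tags


template, tokens, tags = make_pairwise_cloze(
    "I'll be there at 7 pm tomorrow.",
    "Does 7 pm work for everyone?",
    "7 pm",
)
# template -> "I'll be there at [BLANK] tomorrow."
# tags     -> ["O", "B", "I", "O", "O", "O"]


class SlotLabeler(nn.Module):
    """Pretrained encoder + light decoding head; only the head is fine-tuned."""

    def __init__(self, encoder: nn.Module, hidden: int = 256, n_tags: int = 3):
        super().__init__()
        self.encoder = encoder                    # pretrained, kept frozen
        self.decoder = nn.Linear(hidden, n_tags)  # per-token B/I/O scores

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden) token embeddings, simplified here
        with torch.no_grad():                     # encoder stays fixed
            h = self.encoder(x)
        return self.decoder(h)


# stand-in for the pretrained general-purpose sequence-labeling encoder
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True),
    num_layers=2,
)
model = SlotLabeler(encoder)
for p in model.encoder.parameters():
    p.requires_grad = False  # freeze the bulk of the pretrained model
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```

As the abstract suggests, keeping the encoder frozen leaves only a small number of trainable parameters in the decoding layers, which is what makes fine-tuning cheap and viable in the few-shot regime where only a handful of labeled slot examples exist per domain.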