Paper Title

SFE-AI at SemEval-2022 Task 11: Low-Resource Named Entity Recognition using Large Pre-trained Language Models

Authors

Changyu Hou, Jun Wang, Yixuan Qiao, Peng Jiang, Peng Gao, Guotong Xie, Qizhi Lin, Xiaopeng Wang, Xiandi Jiang, Benqi Wang, Qifeng Xiao

Abstract

Large-scale pre-trained models have been widely used in named entity recognition (NER) tasks. However, model ensembling through parameter averaging or voting cannot fully exploit the complementary strengths of different models, especially in the open domain. This paper describes our NER system for SemEval 2022 Task 11: MultiCoNER. We propose an effective system that adaptively ensembles pre-trained language models through a Transformer layer. By assigning different weights to each model for different inputs, the Transformer layer integrates the strengths of diverse models effectively. Experimental results show that our method achieves superior performance on Farsi and Dutch.
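
The following is a minimal PyTorch sketch of the adaptive-ensemble idea the abstract describes: token-level representations from several pre-trained encoders are stacked along a "model" axis and fused by a Transformer layer, so the weight effectively given to each backbone can vary per input. The module name, hyperparameters, and the mean-pooling readout are illustrative assumptions, not the authors' actual implementation.

# Sketch: fuse per-token outputs of multiple pre-trained encoders with a
# Transformer layer whose self-attention acts as input-dependent model weights.
import torch
import torch.nn as nn

class TransformerEnsemble(nn.Module):  # hypothetical name, not the paper's code
    def __init__(self, hidden_size: int = 768, num_models: int = 3, num_labels: int = 9):
        super().__init__()
        # One learned embedding per backbone so the fusion layer can tell
        # the encoders apart (analogous to a segment/type embedding).
        self.model_embed = nn.Embedding(num_models, hidden_size)
        self.fusion = nn.TransformerEncoderLayer(
            d_model=hidden_size, nhead=8, batch_first=True
        )
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, encoder_outputs: list) -> torch.Tensor:
        # encoder_outputs: list of (batch, seq_len, hidden) tensors, one per
        # pre-trained model, assumed already projected to a shared hidden size.
        batch, seq_len, hidden = encoder_outputs[0].shape
        num_models = len(encoder_outputs)
        stacked = torch.stack(encoder_outputs, dim=2)              # (B, S, M, H)
        ids = torch.arange(num_models, device=stacked.device)
        stacked = stacked + self.model_embed(ids)                  # tag each backbone
        # Self-attention mixes the M candidate representations of each token;
        # its attention weights differ per input, realizing adaptive weighting.
        flat = stacked.view(batch * seq_len, num_models, hidden)
        fused = self.fusion(flat).mean(dim=1)                      # (B*S, H)
        return self.classifier(fused.view(batch, seq_len, hidden)) # per-token logits

# Example: outputs of three dummy backbones for a batch of 2 sentences, length 5.
outs = [torch.randn(2, 5, 768) for _ in range(3)]
print(TransformerEnsemble()(outs).shape)  # torch.Size([2, 5, 9])

In contrast to parameter averaging or voting, nothing here forces the contribution of each backbone to be fixed: the attention pattern, and hence the mixture, is recomputed for every token of every input.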
