Paper Title

Reconciliation of Pre-trained Models and Prototypical Neural Networks in Few-shot Named Entity Recognition

Authors

Youcheng Huang, Wenqiang Lei, Jie Fu, Jiancheng Lv

Abstract

Incorporating large-scale pre-trained models with prototypical neural networks is a de-facto paradigm in few-shot named entity recognition. Existing methods, unfortunately, are not aware of the fact that embeddings from pre-trained models contain a prominently large amount of information regarding word frequencies, biasing prototypical neural networks against learning word entities. This discrepancy constrains the two models' synergy. Thus, we propose a one-line-code normalization method to reconcile such a mismatch, with empirical and theoretical grounds. Our experiments based on nine benchmark datasets show that our method outperforms the counterpart models and is comparable to state-of-the-art methods. In addition to the model enhancement, our work also provides an analytical viewpoint for addressing general problems in few-shot named entity recognition or other tasks that rely on pre-trained models or prototypical neural networks.
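The abstract describes the standard few-shot NER pipeline, in which pre-trained token embeddings are averaged into per-class prototypes and query tokens are classified by their distance to the nearest prototype, plus a one-line normalization intended to strip frequency-related information from the embeddings. The sketch below is a minimal PyTorch illustration of that pipeline, not the paper's implementation: the L2 normalization in `normalize` is an assumed stand-in for the paper's exact one-line method, and all function names are hypothetical.

```python
import torch
import torch.nn.functional as F

def normalize(emb: torch.Tensor) -> torch.Tensor:
    # Assumed stand-in for the paper's one-line normalization: rescaling each
    # token embedding to unit L2 norm discards vector magnitudes, where much
    # frequency-related information tends to concentrate.
    return F.normalize(emb, p=2, dim=-1)

def build_prototypes(support_emb: torch.Tensor,
                     support_labels: torch.Tensor,
                     num_classes: int) -> torch.Tensor:
    # Prototype of each entity class = mean of its support embeddings.
    return torch.stack([support_emb[support_labels == c].mean(dim=0)
                        for c in range(num_classes)])

def classify(query_emb: torch.Tensor, protos: torch.Tensor) -> torch.Tensor:
    # Nearest-prototype classification by Euclidean distance.
    return torch.cdist(query_emb, protos).argmin(dim=-1)

# Toy usage with random stand-ins for pre-trained token embeddings:
support = normalize(torch.randn(9, 768))
labels = torch.tensor([0, 0, 0, 1, 1, 1, 2, 2, 2])  # a 3-way 3-shot episode
protos = build_prototypes(support, labels, num_classes=3)
preds = classify(normalize(torch.randn(4, 768)), protos)
```

Note that the same normalization is applied to both support and query embeddings, so the distance geometry used to form prototypes matches the one used at classification time.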
