Paper Title

Holistic Sentence Embeddings for Better Out-of-Distribution Detection

Paper Authors

Sishuo Chen, Xiaohan Bi, Rundong Gao, Xu Sun

Paper Abstract

Detecting out-of-distribution (OOD) instances is significant for the safe deployment of NLP models. Among recent textual OOD detection works based on pretrained language models (PLMs), distance-based methods have shown superior performance. However, they estimate sample distance scores in the last-layer CLS embedding space and thus do not make full use of the linguistic information underlying PLMs. To address the issue, we propose to boost OOD detection by deriving more holistic sentence embeddings. On the basis of the observations that token averaging and layer combination contribute to improving OOD detection, we propose a simple embedding approach named Avg-Avg, which averages all token representations from each intermediate layer as the sentence embedding and significantly surpasses the state-of-the-art on a comprehensive suite of benchmarks by a 9.33% FAR95 margin. Furthermore, our analysis demonstrates that it indeed helps preserve general linguistic knowledge in fine-tuned PLMs and substantially benefits detecting background shifts. The simple yet effective embedding method can be applied to fine-tuned PLMs with negligible extra costs, providing a free gain in OOD detection. Our code is available at https://github.com/lancopku/Avg-Avg.
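For readers who want a concrete picture, below is a minimal PyTorch sketch of the pipeline the abstract describes: average token representations within each layer, average those layer-wise means across layers, then score the result with a Mahalanobis-style distance to in-distribution training features. The model choice, masking details, and helper names (avg_avg_embed, fit_gaussian, ood_score) are our own illustrative assumptions, not the authors' code; the official implementation is at the repository linked above.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# In practice this would be a PLM fine-tuned on the in-distribution task;
# "bert-base-uncased" here is just an illustrative stand-in.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()


@torch.no_grad()
def avg_avg_embed(texts):
    """Return one Avg-Avg sentence embedding per input text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    out = model(**batch, output_hidden_states=True)
    # hidden_states is a tuple of (num_layers + 1) tensors of shape
    # [batch, seq, dim]; index 0 is the embedding layer, so indices
    # 1..num_layers are the transformer layers we average over.
    layers = torch.stack(out.hidden_states[1:], dim=0)            # [L, B, T, D]
    mask = batch["attention_mask"][None, :, :, None].float()      # [1, B, T, 1]
    token_avg = (layers * mask).sum(dim=2) / mask.sum(dim=2)      # mean over tokens
    return token_avg.mean(dim=0)                                  # mean over layers


def fit_gaussian(train_feats, train_labels):
    # Class-conditional Gaussians with a shared covariance, the standard
    # setup for Mahalanobis-distance OOD scoring. Assumes integer labels
    # in 0..C-1.
    num_classes = int(train_labels.max()) + 1
    means = torch.stack([train_feats[train_labels == c].mean(0)
                         for c in range(num_classes)])
    centered = train_feats - means[train_labels]
    cov = centered.T @ centered / len(train_feats)
    return means, torch.linalg.pinv(cov)


def ood_score(feats, means, precision):
    # Negative minimum Mahalanobis distance: lower scores => more likely OOD.
    diffs = feats[:, None, :] - means[None, :, :]                 # [B, C, D]
    d2 = torch.einsum("bcd,de,bce->bc", diffs, precision, diffs)  # [B, C]
    return -d2.min(dim=1).values
```

Note that Avg-Avg adds essentially no cost beyond a forward pass that already exposes hidden states, which is consistent with the abstract's claim of a "free gain" for fine-tuned PLMs.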
