Paper Title

How Does Beam Search Improve Span-Level Confidence Estimation in Generative Sequence Labeling?

Paper Authors

Kazuma Hashimoto, Iftekhar Naim, Karthik Raman

Paper Abstract

Sequence labeling is a core task in text understanding for IE/IR systems. Text generation models have increasingly become the go-to solution for such tasks (e.g., entity extraction and dialog slot filling). While most research has focused on the labeling accuracy, a key aspect -- of vital practical importance -- has slipped through the cracks: understanding model confidence. More specifically, we lack a principled understanding of how to reliably gauge the confidence of a model in its predictions for each labeled span. This paper aims to provide some empirical insights on estimating model confidence for generative sequence labeling. Most notably, we find that simply using the decoder's output probabilities is not the best in realizing well-calibrated confidence estimates. As verified over six public datasets of different tasks, we show that our proposed approach -- which leverages statistics from top-k predictions by a beam search -- significantly reduces calibration errors of the predictions of a generative sequence labeling model.
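
To make the core idea concrete, here is a minimal sketch (not necessarily the paper's exact method) of one way to "leverage statistics from top-k predictions by a beam search": aggregate the probability mass of the beam hypotheses that agree on a given labeled span, and use that renormalized mass as the span's confidence. The function name `span_confidence_from_beam` and the data layout are illustrative assumptions.

```python
from typing import List, Set, Tuple

Span = Tuple[str, str]  # (span text, label), e.g. ("New York", "LOC")

def span_confidence_from_beam(
    beam: List[Tuple[Set[Span], float]], target: Span
) -> float:
    """Estimate span-level confidence from top-k beam search outputs.

    `beam` holds (spans, prob) pairs: the labeled spans parsed from one
    decoded hypothesis and that hypothesis's sequence probability.
    The confidence of `target` is the probability mass of the hypotheses
    that contain it, renormalized over the top-k candidates.
    """
    total = sum(prob for _, prob in beam)
    if total == 0.0:
        return 0.0
    agree = sum(prob for spans, prob in beam if target in spans)
    return agree / total

# Example: three beam hypotheses disagree on the label of "New York".
beam = [
    ({("New York", "LOC")}, 0.62),
    ({("New York", "ORG")}, 0.21),
    ({("New York", "LOC")}, 0.09),
]
print(span_confidence_from_beam(beam, ("New York", "LOC")))  # ~0.77
```

Intuitively, a span whose label flips across the top-k hypotheses receives lower confidence than the raw decoder probability of the single best sequence would suggest, which is the kind of signal the abstract credits for the reduced calibration error.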
