Paper Title

Measuring Memorization Effect in Word-Level Neural Networks Probing

Paper Authors

Rudolf Rosa, Tomáš Musil, David Mareček

Paper Abstract

Multiple studies have probed representations emerging in neural networks trained for end-to-end NLP tasks and examined what word-level linguistic information may be encoded in the representations. In classical probing, a classifier is trained on the representations to extract the target linguistic information. However, there is a threat of the classifier simply memorizing the linguistic labels for individual words, instead of extracting the linguistic abstractions from the representations, thus reporting false positive results. While considerable efforts have been made to minimize the memorization problem, the task of actually measuring the amount of memorization happening in the classifier has been understudied so far. In our work, we propose a simple general method for measuring the memorization effect, based on a symmetric selection of comparable sets of test words seen versus unseen in training. Our method can be used to explicitly quantify the amount of memorization happening in a probing setup, so that an adequate setup can be chosen and the results of the probing can be interpreted with a reliability estimate. We exemplify this by showcasing our method on a case study of probing for part of speech in a trained neural machine translation encoder.
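To make the idea concrete, below is a minimal sketch of the seen-versus-unseen measurement the abstract describes, on synthetic data. It is not the authors' implementation: the probe choice (logistic regression), the data generation, and all variable names are illustrative assumptions standing in for real NMT encoder states and POS tags.

```python
# Sketch of measuring the memorization effect: split comparable word types
# symmetrically into "seen" and "unseen" halves, train the probe only on
# tokens of seen types, and compare test accuracy on the two halves.
# Synthetic data; every name and hyperparameter here is a hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n_types, dim, tokens_per_type, n_tags = 400, 32, 6, 5
type_emb = rng.normal(size=(n_types, dim))        # one base vector per word type
type_pos = rng.integers(0, n_tags, size=n_types)  # POS tag id of each word type

# Contextual token representations: type embedding plus context noise.
reps = np.repeat(type_emb, tokens_per_type, axis=0)
reps += 0.5 * rng.normal(size=reps.shape)
labels = np.repeat(type_pos, tokens_per_type)
types = np.repeat(np.arange(n_types), tokens_per_type)

# Symmetric split of comparable word types into seen and unseen halves.
perm = rng.permutation(n_types)
seen_types = perm[: n_types // 2]
is_seen = np.isin(types, seen_types)

# Train on some tokens of seen types; test on held-out tokens of seen
# types versus all tokens of unseen types.
token_idx = np.arange(len(labels))
train = token_idx[is_seen & (token_idx % 2 == 0)]
test_seen = token_idx[is_seen & (token_idx % 2 == 1)]
test_unseen = token_idx[~is_seen]

probe = LogisticRegression(max_iter=1000).fit(reps[train], labels[train])
acc_seen = probe.score(reps[test_seen], labels[test_seen])
acc_unseen = probe.score(reps[test_unseen], labels[test_unseen])

# The accuracy gap between seen and unseen test words estimates how much
# of the probe's reported accuracy comes from memorizing word identities.
print(f"seen: {acc_seen:.3f}  unseen: {acc_unseen:.3f}  "
      f"memorization effect ~ {acc_seen - acc_unseen:.3f}")
```

Because the two halves are drawn symmetrically from the same pool of word types, any accuracy gap can be attributed to memorization rather than to differences between the word sets, which is the reliability estimate the abstract refers to.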
