Paper Title
Analyzing Transformers in Embedding Space
Paper Authors
Paper Abstract
Understanding Transformer-based models has attracted significant attention, as they lie at the heart of recent technological advances across machine learning. While most interpretability methods rely on running models over inputs, recent work has shown that a zero-pass approach, where parameters are interpreted directly without a forward/backward pass, is feasible for some Transformer parameters and for two-layer attention networks. In this work, we present a theoretical analysis in which all parameters of a trained Transformer are interpreted by projecting them into the embedding space, that is, the space of vocabulary items they operate on. We derive a simple theoretical framework to support our arguments and provide ample evidence for its validity. First, we present an empirical analysis showing that parameters of both pretrained and fine-tuned models can be interpreted in embedding space. Second, we present two applications of our framework: (a) aligning the parameters of different models that share a vocabulary, and (b) constructing a classifier without training by "translating" the parameters of a fine-tuned classifier into parameters of a different model that was only pretrained. Overall, our findings open the door to interpretation methods that, at least in part, abstract away from model specifics and operate in the embedding space only.
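To make the projection concrete, below is a minimal sketch of the core idea: a parameter vector that lives in the model's hidden space is multiplied by the (tied) token embedding matrix, and the highest-scoring vocabulary items are read off as its interpretation. The specific choices here, a GPT-2 checkpoint from HuggingFace Transformers, an arbitrarily picked layer, and the feed-forward output ("value") vectors as the parameters to inspect, are illustrative assumptions for this sketch, not the authors' released implementation.

```python
# Minimal sketch (illustrative, not the paper's official code): project
# Transformer parameter vectors into embedding space and list the nearest
# vocabulary items. Assumes a GPT-2 checkpoint; layer and parameter choices
# are arbitrary examples.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

E = model.transformer.wte.weight      # (vocab_size, hidden_dim), tied with the LM head
layer = model.transformer.h[10]       # an arbitrary mid-to-late layer
# In GPT-2's Conv1D convention, mlp.c_proj.weight has shape (ffn_dim, hidden_dim),
# so each row is one feed-forward "value" vector living in the hidden space.
ff_values = layer.mlp.c_proj.weight   # (ffn_dim, hidden_dim)

def top_tokens(vec, k=10):
    """Project a hidden-space vector onto the vocabulary and return the top-k tokens."""
    logits = E @ vec                  # (vocab_size,) scores over vocabulary items
    ids = torch.topk(logits, k).indices
    return [tokenizer.decode([int(i)]) for i in ids]

# Inspect a handful of feed-forward value vectors in embedding space.
with torch.no_grad():
    for idx in range(3):
        print(f"FF value {idx}:", top_tokens(ff_values[idx]))
```

In the same spirit, the framework described in the abstract inspects the remaining parameter matrices (for example, attention parameters) by projecting them through the embedding matrix as well, which is what allows all parameters of the model to be read in vocabulary space.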