Paper Title
Limitations of Language Models in Arithmetic and Symbolic Induction
Authors
Abstract
Recent work has shown that large pretrained Language Models (LMs) not only perform remarkably well on a range of Natural Language Processing (NLP) tasks but also begin to improve on reasoning tasks such as arithmetic induction, symbolic manipulation, and commonsense reasoning as model size increases. However, it remains unclear what the underlying capabilities of these LMs are. Surprisingly, we find that these models have limitations on certain basic symbolic manipulation tasks such as copy, reverse, and addition. When the total number of symbols or the number of repeating symbols increases, model performance drops quickly. We investigate the potential causes behind this phenomenon and examine a set of possible remedies, including explicit positional markers, fine-grained computation steps, and LMs with callable programs. Experimental results show that none of these techniques completely solves even the simplest addition induction problem. Finally, we introduce LMs with tutor, an approach in which every single step of the computation is demonstrated during teaching. LMs with tutor achieve 100% accuracy on out-of-distribution (OOD) inputs and repeating symbols, shedding new light on the boundary of large LMs in induction.
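To make the abstract's task setup concrete, below is a minimal sketch (not code from the paper) of the three symbolic manipulation tasks it probes (copy, reverse, and addition) together with one illustration of the "explicit positional markers" idea: every symbol is tagged with its index so that repeated symbols remain distinguishable to the model. All function and marker names here are hypothetical choices for illustration.

```python
# Toy generators for the copy / reverse / addition induction tasks,
# plus a hypothetical positional-marker encoding.

def make_copy_example(s: str) -> tuple[str, str]:
    """Copy task: the target output is the input sequence itself."""
    return s, s

def make_reverse_example(s: str) -> tuple[str, str]:
    """Reverse task: the target output is the input sequence reversed."""
    return s, s[::-1]

def make_addition_example(a: int, b: int) -> tuple[str, str]:
    """Addition task: the prompt is 'a + b', the target is the decimal sum."""
    return f"{a} + {b}", str(a + b)

def with_positional_markers(s: str) -> str:
    """Tag each symbol with an explicit positional marker, e.g.
    '1121' -> 'p0:1 p1:1 p2:2 p3:1', so repeated symbols such as the
    three 1s are no longer ambiguous by surface form alone."""
    return " ".join(f"p{i}:{c}" for i, c in enumerate(s))

if __name__ == "__main__":
    print(make_copy_example("aabba"))
    print(make_reverse_example("12345"))
    print(make_addition_example(987, 654))
    print(with_positional_markers("1121"))
```

The abstract's observation is that accuracy on exactly these targets degrades as sequence length grows or as symbols repeat, which is why disambiguation schemes like the positional markers above were examined.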