Paper Title

The Lazy Neuron Phenomenon: On Emergence of Activation Sparsity in Transformers

Paper Authors

Zonglin Li, Chong You, Srinadh Bhojanapalli, Daliang Li, Ankit Singh Rawat, Sashank J. Reddi, Ke Ye, Felix Chern, Felix Yu, Ruiqi Guo, Sanjiv Kumar

Paper Abstract

This paper studies the curious phenomenon for machine learning models with Transformer architectures that their activation maps are sparse. By activation map we refer to the intermediate output of the multi-layer perceptrons (MLPs) after a ReLU activation function, and by sparse we mean that on average very few entries (e.g., 3.0% for T5-Base and 6.3% for ViT-B16) are nonzero for each input to MLP. Moreover, larger Transformers with more layers and wider MLP hidden dimensions are sparser as measured by the percentage of nonzero entries. Through extensive experiments we demonstrate that the emergence of sparsity is a prevalent phenomenon that occurs for both natural language processing and vision tasks, on both training and evaluation data, for Transformers of various configurations, at layers of all depth levels, as well as for other architectures including MLP-mixers and 2-layer MLPs. We show that sparsity also emerges using training datasets with random labels, or with random inputs, or with infinite amount of data, demonstrating that sparsity is not a result of a specific family of datasets. We discuss how sparsity immediately implies a way to significantly reduce the FLOP count and improve efficiency for Transformers. Moreover, we demonstrate perhaps surprisingly that enforcing an even sparser activation via Top-k thresholding with a small value of k brings a collection of desired but missing properties for Transformers, namely less sensitivity to noisy training data, more robustness to input corruptions, and better calibration for their prediction confidence.
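Below is a minimal NumPy sketch, not taken from the paper, illustrating the two quantities the abstract discusses: the fraction of nonzero entries in an MLP block's post-ReLU activation map, and a Top-k thresholding that keeps only the k largest activations per token. The toy dimensions and the helper names (`mlp_forward`, `nonzero_fraction`, `top_k_threshold`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def mlp_forward(x, w1, w2):
    """Transformer-style MLP block: Linear -> ReLU -> Linear.
    Returns the block output and the intermediate post-ReLU activation map."""
    h = relu(x @ w1)          # the "activation map" referred to in the abstract
    return h @ w2, h

def nonzero_fraction(h):
    """Fraction of nonzero entries in the activation map (the sparsity metric)."""
    return np.count_nonzero(h) / h.size

def top_k_threshold(h, k):
    """Keep only the k largest entries in each row (token) of h, zero the rest."""
    if k >= h.shape[-1]:
        return h
    kth = np.partition(h, -k, axis=-1)[..., -k][..., None]  # k-th largest per row
    return np.where(h >= kth, h, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d_model, d_ff, num_tokens = 64, 256, 8   # toy sizes, not T5/ViT configurations
    x = rng.standard_normal((num_tokens, d_model))
    w1 = rng.standard_normal((d_model, d_ff)) / np.sqrt(d_model)
    w2 = rng.standard_normal((d_ff, d_model)) / np.sqrt(d_ff)

    _, h = mlp_forward(x, w1, w2)
    print(f"nonzero fraction before Top-k: {nonzero_fraction(h):.3f}")

    h_sparse = top_k_threshold(h, k=16)
    print(f"nonzero fraction after Top-k (k=16): {nonzero_fraction(h_sparse):.3f}")
```

With random weights as above, roughly half of the post-ReLU entries are nonzero; the paper's observation is that training drives this fraction far lower (e.g., the 3.0% for T5-Base and 6.3% for ViT-B16 quoted in the abstract), and that explicitly enforcing even sparser activations via Top-k brings additional robustness and calibration benefits.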
