Paper Title
Efficient NTK using Dimensionality Reduction
Paper Authors
Paper Abstract
Recently, the neural tangent kernel (NTK) has been used to explain the dynamics of the learned parameters of neural networks in the large-width limit. Quantitative NTK analyses give rise to network widths that are often impractical and incur high time and energy costs in both training and deployment. Using a matrix factorization technique, we show how to obtain guarantees similar to those obtained by a prior analysis while reducing training and inference resource costs. The importance of our result increases further when the data dimension of the input points is of the same order as the number of input points. More generally, our work suggests how to analyze large-width networks in which dense linear layers are replaced with a low-complexity factorization, thereby reducing the heavy dependence on the large width.
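
To make the core idea concrete, the following is a minimal sketch (not taken from the paper, with illustrative dimensions and rank chosen arbitrarily) of what replacing a dense linear layer with a low-rank factorization looks like, and how it reduces parameter and compute cost from O(m·d) to O((m + d)·r):

```python
# Minimal sketch: a dense wide layer vs. a low-rank factorized replacement.
# Dimensions d, m and rank r below are hypothetical, chosen only for illustration.
import numpy as np

rng = np.random.default_rng(0)

d, m, r = 512, 8192, 64   # input dimension, (large) layer width, factorization rank

x = rng.standard_normal(d)

# Dense first layer: an m x d weight matrix, O(m*d) storage and multiply cost.
W_dense = rng.standard_normal((m, d)) / np.sqrt(d)
h_dense = np.maximum(W_dense @ x, 0.0)          # ReLU activation

# Factorized replacement W ~ U @ V with U: m x r and V: r x d,
# reducing storage and multiply cost to O((m + d) * r).
U = rng.standard_normal((m, r)) / np.sqrt(r)
V = rng.standard_normal((r, d)) / np.sqrt(d)
h_factored = np.maximum(U @ (V @ x), 0.0)

print(h_dense.shape, h_factored.shape)          # both outputs have shape (m,)
```

The point of the sketch is only the cost comparison: the factorized layer keeps the large output width m while its parameter count and per-example compute scale with the rank r rather than with m·d.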