Paper Title

Deep Linear Networks can Benignly Overfit when Shallow Ones Do

Authors

Chatterji, Niladri S.; Long, Philip M.

Abstract


We bound the excess risk of interpolating deep linear networks trained using gradient flow. In a setting previously used to establish risk bounds for the minimum $\ell_2$-norm interpolant, we show that randomly initialized deep linear networks can closely approximate or even match known bounds for the minimum $\ell_2$-norm interpolant. Our analysis also reveals that interpolating deep linear models have exactly the same conditional variance as the minimum $\ell_2$-norm solution. Since the noise affects the excess risk only through the conditional variance, this implies that depth does not improve the algorithm's ability to "hide the noise". Our simulations verify that aspects of our bounds reflect typical behavior for simple data distributions. We also find that similar phenomena are seen in simulations with ReLU networks, although the situation there is more nuanced.
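As an illustration of the setting the abstract describes (not the authors' experimental setup), the sketch below trains a small deep linear network with gradient descent, used here as a discretization of gradient flow, to interpolate overparameterized noisy linear regression data, and then compares its end-to-end predictor with the minimum $\ell_2$-norm interpolant $X^{+}y$. The sample size, dimension, depth, width, step size, and initialization scale are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Overparameterized noisy linear regression: n samples, d > n features.
n, d = 50, 200
X = rng.standard_normal((n, d))
theta_star = rng.standard_normal(d) / np.sqrt(d)
y = X @ theta_star + 0.1 * rng.standard_normal(n)

# Minimum l2-norm interpolant: the pseudo-inverse solution X^+ y.
beta_min_norm = np.linalg.pinv(X) @ y

# Depth-3 linear network f(x) = x @ W0 @ W1 @ W2; its end-to-end map is a single
# vector in R^d.  Width and the small initialization scale are illustrative choices.
width, init_scale = 50, 0.05
W = [init_scale * rng.standard_normal((d, width)) / np.sqrt(d),
     init_scale * rng.standard_normal((width, width)) / np.sqrt(width),
     init_scale * rng.standard_normal((width, 1)) / np.sqrt(width)]

def chain(mats, dim):
    """Multiply matrices left to right; identity of size `dim` if the list is empty."""
    if not mats:
        return np.eye(dim)
    out = mats[0]
    for M in mats[1:]:
        out = out @ M
    return out

def end_to_end(W):
    """Collapse the layers into a single d-dimensional linear predictor."""
    return chain(W, d)[:, 0]

# Gradient descent on the squared loss, used as a discretization of gradient flow.
lr = 0.02
for _ in range(30000):
    beta = end_to_end(W)
    residual = X @ beta - y
    if np.linalg.norm(residual) < 1e-3 * np.linalg.norm(y):
        break                                  # stop once the fit is essentially exact
    grad_beta = (X.T @ residual / n)[:, None]  # gradient w.r.t. the end-to-end map
    # Chain rule: with beta = A @ W[i] @ B, the gradient w.r.t. W[i] is A^T grad_beta B^T.
    grads = [chain(W[:i], d).T @ grad_beta @ chain(W[i + 1:], W[i].shape[1]).T
             for i in range(len(W))]
    for Wi, Gi in zip(W, grads):
        Wi -= lr * Gi

beta_deep = end_to_end(W)
print("training residual of the deep linear interpolant:",
      np.linalg.norm(X @ beta_deep - y))
print("relative distance to the minimum-norm interpolant:",
      np.linalg.norm(beta_deep - beta_min_norm) / np.linalg.norm(beta_min_norm))
```

With a small initialization scale, the trained network's end-to-end predictor should land close to the minimum $\ell_2$-norm interpolant, consistent with the comparison made in the abstract; the exact gap depends on the assumed hyperparameters above.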
