Paper Title


Guillotine Regularization: Why removing layers is needed to improve generalization in Self-Supervised Learning

Paper Authors

Florian Bordes, Randall Balestriero, Quentin Garrido, Adrien Bardes, Pascal Vincent

Paper Abstract


One unexpected technique that emerged in recent years consists in training a Deep Network (DN) with a Self-Supervised Learning (SSL) method, and using this network on downstream tasks but with its last few projector layers entirely removed. This trick of throwing away the projector is actually critical for SSL methods to display competitive performances on ImageNet for which more than 30 percentage points can be gained that way. This is a little vexing, as one would hope that the network layer at which invariance is explicitly enforced by the SSL criterion during training (the last projector layer) should be the one to use for best generalization performance downstream. But it seems not to be, and this study sheds some light on why. This trick, which we name Guillotine Regularization (GR), is in fact a generically applicable method that has been used to improve generalization performance in transfer learning scenarios. In this work, we identify the underlying reasons behind its success and show that the optimal layer to use might change significantly depending on the training setup, the data or the downstream task. Lastly, we give some insights on how to reduce the need for a projector in SSL by aligning the pretext SSL task and the downstream task.
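To make the trick described in the abstract concrete, here is a minimal sketch (not the authors' official code) of how "Guillotine Regularization" is typically applied in practice: an SSL model is trained with a backbone plus an MLP projector, and at downstream time the last few projector layers are cut off so that features are read out from an earlier layer. The backbone choice, projector width, and the `cut` argument below are illustrative assumptions, not details specified by the paper.

```python
# Minimal sketch of Guillotine Regularization at evaluation time.
# Assumptions: a ResNet-50 backbone and a 3-layer MLP projector, as is
# common in SimCLR/VICReg-style SSL setups; names are illustrative.

import torch
import torch.nn as nn
import torchvision.models as models


class SSLModel(nn.Module):
    """Backbone + MLP projector, as commonly used in SSL methods."""

    def __init__(self, feat_dim: int = 2048, proj_dim: int = 256):
        super().__init__()
        resnet = models.resnet50(weights=None)
        # Drop the classification fc layer; keep conv stages + global pooling.
        self.backbone = nn.Sequential(*list(resnet.children())[:-1])
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(inplace=True),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(inplace=True),
            nn.Linear(feat_dim, proj_dim),
        )

    def forward(self, x: torch.Tensor, cut: int = 0) -> torch.Tensor:
        """cut=0  -> final projector output (where the SSL loss enforces invariance);
        cut=k  -> "guillotine" the last k projector modules;
        cut >= len(projector) -> raw backbone features, the usual downstream choice."""
        h = torch.flatten(self.backbone(x), 1)
        if cut == 0:
            return self.projector(h)
        keep = max(len(self.projector) - cut, 0)
        return self.projector[:keep](h) if keep > 0 else h


model = SSLModel().eval()
images = torch.randn(4, 3, 224, 224)
with torch.no_grad():
    z_projector = model(images, cut=0)  # layer where the SSL criterion is applied
    z_backbone = model(images, cut=5)   # projector fully removed for downstream use
print(z_projector.shape, z_backbone.shape)
```

In a transfer-learning evaluation, a linear probe would then be trained on `z_backbone` (or on an intermediate cut) rather than on `z_projector`; the paper's point is that the best cut depends on the training setup, the data, and the downstream task.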
