Paper Title

On Transfer of Adversarial Robustness from Pretraining to Downstream Tasks

Paper Authors

Laura Fee Nern, Harsh Raj, Maurice Georgi, Yash Sharma

Paper Abstract

As large-scale training regimes have gained popularity, the use of pretrained models for downstream tasks has become common practice in machine learning. While pretraining has been shown to enhance the performance of models in practice, the transfer of robustness properties from pretraining to downstream tasks remains poorly understood. In this study, we demonstrate that the robustness of a linear predictor on downstream tasks can be constrained by the robustness of its underlying representation, regardless of the protocol used for pretraining. We prove (i) a bound on the loss that holds independent of any downstream task, as well as (ii) a criterion for robust classification in particular. We validate our theoretical results in practical applications, show how our results can be used to calibrate expectations of downstream robustness, and demonstrate when they are useful for optimal transfer learning. Taken together, our results offer an initial step towards characterizing the requirements of the representation function for reliable post-adaptation performance.
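As a rough illustration of the abstract's core claim (not the paper's actual theorem or notation), the sketch below considers a frozen representation g with a linear head (w, b) on top. By Cauchy-Schwarz, an input perturbation cannot flip the prediction if the feature-space shift it induces stays below the classification margin divided by ||w||, so the head's adversarial robustness is governed by how sensitive the representation is. The `encoder`, the empirical `feature_sensitivity` estimate, and all numbers are hypothetical placeholders.

```python
# Minimal sketch, assuming a frozen representation g and a binary
# linear head f(z) = w @ z + b. This is an illustration of the
# general idea, not the bound proved in the paper.
import numpy as np

def feature_sensitivity(g, x, eps, n_samples=256, seed=0):
    """Empirically estimate max ||g(x + delta) - g(x)||_2 over random
    perturbations with ||delta||_inf <= eps. Random sampling only gives
    a crude lower bound; an adversarial attack would be tighter."""
    rng = np.random.default_rng(seed)
    z = g(x)
    worst = 0.0
    for _ in range(n_samples):
        delta = rng.uniform(-eps, eps, size=x.shape)
        worst = max(worst, float(np.linalg.norm(g(x + delta) - z)))
    return worst

def is_margin_robust(w, b, z, feat_eps):
    """Robust if the margin |w @ z + b| exceeds ||w||_2 * feat_eps:
    by Cauchy-Schwarz, |w @ dz| <= ||w||_2 * ||dz||_2, so no
    feature shift of norm <= feat_eps can change the sign."""
    return abs(w @ z + b) > np.linalg.norm(w) * feat_eps

# Toy usage; 'encoder' stands in for a pretrained representation.
encoder = lambda x: np.tanh(x)  # hypothetical placeholder
x = np.array([0.5, -1.2, 0.3])
w, b = np.array([1.0, -0.5, 2.0]), 0.1

feat_eps = feature_sensitivity(encoder, x, eps=0.05)
print(is_margin_robust(w, b, encoder(x), feat_eps))
```

The takeaway of the sketch matches the abstract: the certificate depends on the representation's sensitivity (`feat_eps`) and the head's margin, but not on how the representation was pretrained.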
