Paper Title
Adversarial robustness of sparse local Lipschitz predictors
Paper Authors
Paper Abstract
This work studies the adversarial robustness of parametric functions composed of a linear predictor and a non-linear representation map. Our analysis relies on sparse local Lipschitzness (SLL), an extension of local Lipschitz continuity that better captures the stability and reduced effective dimensionality of predictors under local perturbations. SLL functions preserve a certain degree of structure, given by the sparsity pattern in the representation map, and include several popular hypothesis classes, such as piecewise linear models, Lasso and its variants, and deep feed-forward ReLU networks. We provide a tighter robustness certificate on the minimal energy of an adversarial example, as well as tighter data-dependent non-uniform bounds on the robust generalization error of these predictors. We instantiate these results for the case of deep neural networks and provide numerical evidence supporting our results, offering new insights into natural regularization strategies that increase the robustness of these models.
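To make the quantities in the abstract concrete, the following LaTeX sketch recalls the standard local Lipschitz definition and the classical margin-based robustness certificate that SLL-type analyses tighten. The notation here ($f$ for the predictor, $L(x,\epsilon)$ for the local Lipschitz constant, $m_f$ for the classification margin) is assumed for illustration and is not taken from the paper; the paper's precise SLL definition and certificate refine these.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% A minimal sketch under assumed notation, not the paper's exact statements.
% Local Lipschitz continuity of a predictor f at x within radius epsilon:
\[
\|f(x) - f(x+\delta)\|_2 \;\le\; L(x,\epsilon)\,\|\delta\|_2
\qquad \text{for all } \|\delta\|_2 \le \epsilon .
\]
% If the classification margin at a correctly labeled point (x, y),
\[
m_f(x) \;=\; f(x)_y - \max_{j \neq y} f(x)_j ,
\]
% is positive, the classical argument certifies that no adversarial example
% exists within radius
\[
\|\delta\|_2 \;<\; \min\!\left\{ \epsilon,\; \frac{m_f(x)}{\sqrt{2}\,L(x,\epsilon)} \right\} .
\]
% An SLL-style analysis would restrict L(x, epsilon) to the active sparsity
% pattern of the representation map, which can only decrease the constant
% and hence enlarge the certified radius.
\end{document}

In this reading, a larger margin or a smaller sparsity-restricted local Lipschitz constant directly enlarges the certified region, which is the sense in which a certificate of this form can be "tighter."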