Paper Title

Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning

Paper Authors

Luiz F. O. Chamon, Santiago Paternain, Alejandro Ribeiro

Paper Abstract

Prediction credibility measures, in the form of confidence intervals or probability distributions, are fundamental in statistics and machine learning to characterize model robustness, detect out-of-distribution samples (outliers), and protect against adversarial attacks. To be effective, these measures should (i) account for the wide variety of models used in practice, (ii) be computable for trained models, or at least avoid modifying established training procedures, (iii) forgo the use of data, which can expose them to the same robustness issues and attacks as the underlying model, and (iv) be accompanied by theoretical guarantees. These principles underlie the framework developed in this work, which expresses credibility as a risk-fit trade-off, i.e., a compromise between how much the fit can be improved by perturbing the model input and the magnitude of this perturbation (risk). Using a constrained optimization formulation and duality theory, we analyze this compromise and show that the balance can be determined counterfactually, without having to test multiple perturbations. This yields an unsupervised, a posteriori method for assigning prediction credibility to any (possibly non-convex) differentiable model, from RKHS-based solutions to any (feedforward, convolutional, graph) neural network architecture. Its use is illustrated in data filtering and in defending against adversarial attacks.
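
To make the risk-fit trade-off concrete, below is a minimal PyTorch sketch. It is not the paper's method: where the paper solves a constrained program and determines the balance counterfactually from its dual variables (without searching over perturbations), this sketch approximates the trade-off with a penalized surrogate minimized by a few gradient steps per candidate class. The function name `credibility_profile`, the penalty weight `lam`, and the optimization settings `n_steps`/`lr` are illustrative assumptions, not quantities from the paper.

```python
# Sketch of the risk-fit idea: for a trained classifier `model` and input x,
# class k is credible if a small perturbation delta suffices to make
# model(x + delta) fit class k. Here "fit" is cross-entropy toward k and
# "risk" is the squared perturbation size; the paper instead treats this as
# a constrained problem solved via duality.
import torch
import torch.nn.functional as F

def credibility_profile(model, x, num_classes, lam=1.0, n_steps=50, lr=0.1):
    """Per-class scores: classes that are cheap to fit get high credibility."""
    model.eval()
    costs = []
    for k in range(num_classes):
        delta = torch.zeros_like(x, requires_grad=True)
        opt = torch.optim.SGD([delta], lr=lr)
        target = torch.tensor([k])
        for _ in range(n_steps):
            opt.zero_grad()
            logits = model(x + delta)
            # penalized surrogate: lam * fit(k) + risk(delta)
            loss = lam * F.cross_entropy(logits, target) + delta.pow(2).sum()
            loss.backward()
            opt.step()
        # evaluate the final cost of fitting class k
        with torch.no_grad():
            logits = model(x + delta)
            cost = lam * F.cross_entropy(logits, target) + delta.pow(2).sum()
        costs.append(cost.item())
    # map costs to a credibility profile: lower cost -> higher score
    return torch.softmax(-torch.tensor(costs), dim=0)

# Usage with a toy model (illustrative only):
if __name__ == "__main__":
    model = torch.nn.Sequential(torch.nn.Linear(4, 16), torch.nn.ReLU(),
                                torch.nn.Linear(16, 3))
    x = torch.randn(1, 4)
    print(credibility_profile(model, x, num_classes=3))
```

Note that this brute-force loop over classes and gradient steps is exactly what the paper's counterfactual dual analysis avoids; the sketch only illustrates what trade-off is being balanced, not how the paper computes it.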
