Paper Title

Stable Adversarial Learning under Distributional Shifts

Paper Authors

Jiashuo Liu, Zheyan Shen, Peng Cui, Linjun Zhou, Kun Kuang, Bo Li, Yishi Lin

Paper Abstract

Machine learning algorithms based on empirical risk minimization are vulnerable to distributional shifts because they greedily adopt all the correlations found in the training data. Recently, robust learning methods have addressed this problem by minimizing the worst-case risk over an uncertainty set. However, they treat all covariates equally when forming the decision sets, regardless of the stability of their correlations with the target, resulting in an overwhelmingly large set and low confidence of the learner. In this paper, we propose the Stable Adversarial Learning (SAL) algorithm, which leverages heterogeneous data sources to construct a more practical uncertainty set and to conduct differentiated robustness optimization, where covariates are differentiated according to the stability of their correlations with the target. We theoretically show that our method is tractable for stochastic gradient-based optimization and provide performance guarantees for our method. Empirical studies on both simulated and real-world datasets validate the effectiveness of our method in terms of uniformly good performance across unknown distributional shifts.
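To make the differentiated-robustness idea concrete, below is a minimal PyTorch sketch (not the authors' implementation) of adversarial training with covariate-wise perturbation budgets. The vector `eps` and the function `sal_step` are hypothetical names introduced here for illustration: `eps` bounds how much each input covariate may be perturbed in the inner maximization, so covariates judged unstable can be attacked more aggressively than stable ones.

```python
import torch

def sal_step(model, loss_fn, optimizer, x, y, eps, adv_lr=0.1, adv_steps=5):
    """One SAL-style training step (illustrative sketch): the inner loop
    searches for a worst-case perturbation within covariate-wise budgets
    `eps`; the outer step updates the model on the worst-case loss."""
    # Inner maximization: projected gradient ascent on the loss w.r.t. delta.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(adv_steps):
        loss = loss_fn(model(x + delta), y)
        (grad,) = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += adv_lr * grad.sign()
            # Project back into the per-covariate box [-eps, eps].
            delta.copy_(torch.max(torch.min(delta, eps), -eps))
    # Outer minimization: update the model on the worst-case loss.
    optimizer.zero_grad()
    worst_case = loss_fn(model(x + delta.detach()), y)
    worst_case.backward()
    optimizer.step()
    return worst_case.item()

# Hypothetical budgets for a 4-dimensional input: covariates 2 and 3 are
# judged unstable, so they receive larger adversarial budgets, pushing the
# learner to rely on the stable covariates 0 and 1 instead.
eps = torch.tensor([0.01, 0.01, 0.5, 0.5])
```

Uniform budgets (all entries of `eps` equal) would recover ordinary worst-case adversarial training; the per-covariate differentiation is what keeps the uncertainty set from becoming overwhelmingly large.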
