Paper Title
Transferring Fairness under Distribution Shifts via Fair Consistency Regularization
Paper Authors
Paper Abstract
The increasing reliance on ML models in high-stakes tasks has raised major concerns about fairness violations. Although there has been a surge of work on improving algorithmic fairness, most of it assumes identical training and test distributions. In many real-world applications, however, this assumption is frequently violated: previously trained fair models are often deployed in a different environment, and their fairness has been observed to collapse. In this paper, we study how to transfer model fairness under distribution shifts, a widespread issue in practice. We conduct a fine-grained analysis of how a fair model is affected under different types of distribution shifts and find that domain shifts are more challenging than subpopulation shifts. Inspired by the success of self-training in transferring accuracy under domain shifts, we derive a sufficient condition for transferring group fairness. Guided by this condition, we propose a practical algorithm with fair consistency regularization as the key component. A synthetic benchmark covering all types of distribution shifts is used to experimentally verify the theoretical findings. Experiments on synthetic and real datasets, including image and tabular data, demonstrate that our approach effectively transfers fairness and accuracy under various distribution shifts.
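The abstract does not specify the exact form of the fair consistency regularizer, so the following is only a minimal sketch of the general idea, assuming a PyTorch classifier, a user-supplied `augment` transform, and a sensitive-group label per example; the function names (`consistency_loss`, `groupwise_consistency_loss`) and the worst-group aggregation are hypothetical illustrations, not the authors' actual method.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x, augment):
    """Standard consistency regularization: the model's prediction on an
    example and on its augmented view should agree (KL divergence between
    the two softmax outputs, with the original view as the pseudo-target)."""
    with torch.no_grad():
        p_orig = F.softmax(model(x), dim=-1)            # fixed pseudo-target
    logp_aug = F.log_softmax(model(augment(x)), dim=-1)  # gradient flows here
    return F.kl_div(logp_aug, p_orig, reduction="batchmean")

def groupwise_consistency_loss(model, x, group, augment):
    """Illustrative 'fair' variant: evaluate the consistency loss separately
    for each sensitive group and penalize the worst-off group, so that
    consistency is not achieved for one subpopulation at the expense of another."""
    per_group = []
    for g in group.unique():
        mask = (group == g)
        per_group.append(consistency_loss(model, x[mask], augment))
    return torch.stack(per_group).max()
```

In a training loop this term would typically be added to the supervised (or self-training) loss on the source domain with a weighting coefficient, with unlabeled target-domain batches passed through `groupwise_consistency_loss`.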