Title

FIFA: Making Fairness More Generalizable in Classifiers Trained on Imbalanced Data

Authors

Zhun Deng, Jiayao Zhang, Linjun Zhang, Ting Ye, Yates Coley, Weijie J. Su, James Zou

Abstract

Algorithmic fairness plays an important role in machine learning and imposing fairness constraints during learning is a common approach. However, many datasets are imbalanced in certain label classes (e.g. "healthy") and sensitive subgroups (e.g. "older patients"). Empirically, this imbalance leads to a lack of generalizability not only of classification, but also of fairness properties, especially in over-parameterized models. For example, fairness-aware training may ensure equalized odds (EO) on the training data, but EO is far from being satisfied on new users. In this paper, we propose a theoretically-principled, yet Flexible approach that is Imbalance-Fairness-Aware (FIFA). Specifically, FIFA encourages both classification and fairness generalization and can be flexibly combined with many existing fair learning methods with logits-based losses. While our main focus is on EO, FIFA can be directly applied to achieve equalized opportunity (EqOpt); and under certain conditions, it can also be applied to other fairness notions. We demonstrate the power of FIFA by combining it with a popular fair classification algorithm, and the resulting algorithm achieves significantly better fairness generalization on several real-world datasets.
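For readers unfamiliar with the equalized odds (EO) criterion the abstract refers to, EO requires the true-positive and false-positive rates of a classifier to match across sensitive groups. Below is a minimal NumPy sketch (our own illustration, not the paper's code; the function name and synthetic data are hypothetical) of how an EO gap can be measured separately on training and held-out data — the kind of comparison behind the abstract's claim that EO achieved on training data may fail to generalize:

```python
import numpy as np

def equalized_odds_gap(y_true, y_pred, group):
    """Largest TPR/FPR disparity between sensitive groups.

    Equalized odds asks that P(y_pred = 1 | y_true = y, group = a)
    be equal across groups a for both y = 1 (TPR) and y = 0 (FPR);
    the gap below measures the worst deviation from that ideal.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    gaps = []
    for y in (0, 1):  # y = 1 compares TPRs; y = 0 compares FPRs
        rates = [
            y_pred[(group == a) & (y_true == y)].mean()
            for a in np.unique(group)
            if ((group == a) & (y_true == y)).any()
        ]
        if len(rates) > 1:
            gaps.append(max(rates) - min(rates))
    return max(gaps) if gaps else 0.0

# Synthetic demo: compare the gap on a "training" vs a "held-out" split.
rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n)        # binary labels
a = rng.integers(0, 2, n)        # binary sensitive attribute
yhat = rng.integers(0, 2, n)     # stand-in for a classifier's predictions
half = n // 2
print("train EO gap:", equalized_odds_gap(y[:half], yhat[:half], a[:half]))
print("test  EO gap:", equalized_odds_gap(y[half:], yhat[half:], a[half:]))
```

On imbalanced data, a small minority subgroup contributes few samples to each (label, group) cell, so a gap that looks near zero on the training split can be substantially larger on held-out data — the generalization failure FIFA is designed to mitigate.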
