Paper Title

Beyond Incompatibility: Trade-offs between Mutually Exclusive Fairness Criteria in Machine Learning and Law

Authors

Meike Zehlike, Alex Loosley, Håkan Jonsson, Emil Wiedemann, Philipp Hacker

Abstract

Fair and trustworthy AI is becoming ever more important in both machine learning and legal domains. One important consequence is that decision makers must seek to guarantee a 'fair', i.e., non-discriminatory, algorithmic decision procedure. However, there are several competing notions of algorithmic fairness that have been shown to be mutually incompatible under realistic factual assumptions. This concerns, for example, the widely used fairness measures of 'calibration within groups' and 'balance for the positive/negative class'. In this paper, we present a novel algorithm (FAir Interpolation Method: FAIM) for continuously interpolating between these three fairness criteria. Thus, an initially unfair prediction can be remedied to, at least partially, meet a desired, weighted combination of the respective fairness conditions. We demonstrate the effectiveness of our algorithm when applied to synthetic data, the COMPAS data set, and a new, real-world data set from the e-commerce sector. Finally, we discuss to what extent FAIM can be harnessed to comply with conflicting legal obligations. The analysis suggests that it may operationalize duties in traditional legal fields, such as credit scoring and criminal justice proceedings, but also for the latest AI regulations put forth in the EU, like the Digital Markets Act and the recently enacted AI Act.
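
The abstract describes FAIM as continuously interpolating scores so that an initially unfair prediction at least partially meets a desired, weighted combination of the three fairness criteria. The snippet below is only a minimal, hypothetical Python sketch of that general idea: a rank-preserving remapping of each group's scores onto a weighted mixture of target quantile curves. The function name `quantile_interpolate`, the toy "own"/"pooled" targets, and the chosen weights are illustrative assumptions, not the authors' FAIM implementation or its actual fairness targets (calibration within groups, balance for the positive class, balance for the negative class), which are defined in the paper itself.

```python
import numpy as np

def quantile_interpolate(scores_by_group, target_quantile_fns, weights, grid_size=101):
    """Hypothetical sketch: rank-preserving remapping of each group's scores onto
    a weighted mixture of per-criterion target quantile curves."""
    qs = np.linspace(0.0, 1.0, grid_size)
    adjusted = {}
    for g, s in scores_by_group.items():
        s = np.asarray(s, dtype=float)
        ranks = (np.argsort(np.argsort(s)) + 0.5) / len(s)  # empirical ranks in (0, 1)
        # weighted mixture of each criterion's target quantile curve for this group
        target_curve = np.zeros_like(qs)
        for criterion, w in weights.items():
            target_curve += w * np.array([target_quantile_fns[criterion][g](p) for p in qs])
        adjusted[g] = np.interp(ranks, qs, target_curve)  # push scores onto the mixed target
    return adjusted

# Toy usage: one made-up "criterion" keeps each group's own score distribution,
# the other pulls both groups toward the pooled distribution; the weights pick
# a point in between, loosely mimicking a weighted combination of criteria.
rng = np.random.default_rng(0)
scores = {"group_a": rng.beta(2, 5, 500), "group_b": rng.beta(5, 2, 500)}
pooled = np.concatenate(list(scores.values()))
targets = {
    "own":    {g: (lambda arr: (lambda p: float(np.quantile(arr, p))))(s)
               for g, s in scores.items()},
    "pooled": {g: (lambda p: float(np.quantile(pooled, p))) for g in scores},
}
adjusted = quantile_interpolate(scores, targets, weights={"own": 0.3, "pooled": 0.7})
```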
