Paper Title

Fairness in machine learning: against false positive rate equality as a measure of fairness

Paper Authors

Long, Robert

Abstract

As machine learning informs increasingly consequential decisions, different metrics have been proposed for measuring algorithmic bias or unfairness. Two popular fairness measures are calibration and equality of false positive rate. Each measure seems intuitively important, but notably, it is usually impossible to satisfy both. For this reason, a large literature in machine learning speaks of a fairness tradeoff between these two measures. This framing assumes that both measures are, in fact, capturing something important. To date, philosophers have not examined this crucial assumption, nor examined to what extent each measure actually tracks a normatively important property. This inevitable statistical conflict between calibration and false positive rate equality is therefore an important topic for ethics. In this paper, I give an ethical framework for thinking about these measures and argue that, contrary to initial appearances, false positive rate equality does not track anything about fairness, and thus sets an incoherent standard for evaluating the fairness of algorithms.
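The abstract's claim that calibration and false positive rate equality usually cannot both hold can be illustrated with a small hypothetical example (the numbers below are invented for illustration and do not come from the paper). Two groups receive the same two risk scores, and within each score bin the observed positive rate exactly matches the score, so the predictor is calibrated for both groups. But because the groups have different base rates, thresholding the scores yields different false positive rates:

```python
from collections import defaultdict
from fractions import Fraction

def fpr(group):
    """False positive rate: predict positive iff score >= 1/2."""
    fp = sum(1 for s, y in group if s >= Fraction(1, 2) and y == 0)
    negatives = sum(1 for s, y in group if y == 0)
    return Fraction(fp, negatives)

def is_calibrated(group):
    """Within each score bin, the observed positive rate equals the score."""
    bins = defaultdict(list)
    for s, y in group:
        bins[s].append(y)
    return all(Fraction(sum(ys), len(ys)) == s for s, ys in bins.items())

# Hypothetical groups with different base rates (56% vs 32%).
# Each person is a (risk score, true outcome) pair.
hi, lo = Fraction(4, 5), Fraction(1, 5)
group_a = [(hi, 1)] * 48 + [(hi, 0)] * 12 + [(lo, 1)] * 8 + [(lo, 0)] * 32
group_b = [(hi, 1)] * 16 + [(hi, 0)] * 4 + [(lo, 1)] * 16 + [(lo, 0)] * 64

assert is_calibrated(group_a) and is_calibrated(group_b)
print(fpr(group_a), fpr(group_b))  # 3/11 vs 1/17: calibration holds, FPR equality fails
```

This mirrors the general result that, whenever base rates differ across groups and the predictor is imperfect, calibration and equal false positive rates cannot be satisfied simultaneously.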
