Paper Title
Domain Adaptation meets Individual Fairness. And they get along
Paper Authors
Paper Abstract
Many instances of algorithmic bias are caused by distributional shifts. For example, machine learning (ML) models often perform worse on demographic groups that are underrepresented in the training data. In this paper, we leverage this connection between algorithmic fairness and distribution shifts to show that algorithmic fairness interventions can help ML models overcome distribution shifts, and that domain adaptation methods (for overcoming distribution shifts) can mitigate algorithmic biases. In particular, we show that (i) enforcing suitable notions of individual fairness (IF) can improve the out-of-distribution accuracy of ML models under the covariate shift assumption and that (ii) it is possible to adapt representation alignment methods for domain adaptation to enforce individual fairness. The former is unexpected because IF interventions were not developed with distribution shifts in mind. The latter is also unexpected because representation alignment is not a common approach in the individual fairness literature.
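To make the connection in (ii) concrete, the sketch below shows one plausible way representation alignment could be repurposed as an individual-fairness regularizer: align the representations of a batch with those of a "comparable" batch (e.g., the same individuals with a protected attribute flipped) and add the alignment penalty to the task loss. This is a minimal illustration under our own assumptions, not the paper's actual method; the moment-matching penalty, the `Encoder`/`training_step` names, and the counterpart construction are all hypothetical.

```python
# Minimal sketch (illustrative only, not the paper's algorithm): an
# individual-fairness-style regularizer built from a representation-alignment
# penalty, in the spirit of domain-adaptation methods.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Toy feature encoder producing the representation to be aligned."""
    def __init__(self, d_in: int, d_rep: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_rep))

    def forward(self, x):
        return self.net(x)

def alignment_penalty(z_a: torch.Tensor, z_b: torch.Tensor) -> torch.Tensor:
    """Match the first two moments of two batches of representations.

    z_a / z_b: representations of two batches the fair metric treats as
    comparable (assumption: e.g., inputs differing only in a protected
    attribute). Smaller penalty means more aligned representations.
    """
    mean_gap = (z_a.mean(0) - z_b.mean(0)).pow(2).sum()
    cov_gap = (torch.cov(z_a.T) - torch.cov(z_b.T)).pow(2).sum()
    return mean_gap + cov_gap

def training_step(encoder, head, x, y, x_comparable, lam=1.0):
    """Task loss plus the alignment regularizer, weighted by lam.

    x_comparable: a hypothetical 'counterpart' batch considered similar to x
    under the fair metric (here: protected attribute flipped).
    """
    z = encoder(x)
    z_c = encoder(x_comparable)
    task_loss = nn.functional.cross_entropy(head(z), y)
    return task_loss + lam * alignment_penalty(z, z_c)

if __name__ == "__main__":
    torch.manual_seed(0)
    enc = Encoder(d_in=10, d_rep=8)
    head = nn.Linear(8, 2)
    opt = torch.optim.Adam(list(enc.parameters()) + list(head.parameters()), lr=1e-3)
    x = torch.randn(32, 10)
    y = torch.randint(0, 2, (32,))
    x_cf = x.clone()
    x_cf[:, 0] = 1 - x_cf[:, 0]  # toy counterpart: flip a binary protected feature
    loss = training_step(enc, head, x, y, x_cf, lam=0.5)
    loss.backward()
    opt.step()
    print(f"loss = {loss.item():.4f}")
```

The design choice here mirrors domain adaptation: instead of aligning source and target domains, the same penalty is applied to pairs of individuals deemed similar, so the learned representation (and hence the predictor) treats them alike.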