Paper Title

CryptoCredit: Securely Training Fair Models

Paper Authors

Leo de Castro, Jiahao Chen, Antigoni Polychroniadou

Paper Abstract

When developing models for regulated decision making, sensitive features like age, race and gender cannot be used and must be obscured from model developers to prevent bias. However, the remaining features still need to be tested for correlation with sensitive features, which can only be done with the knowledge of those features. We resolve this dilemma using a fully homomorphic encryption scheme, allowing model developers to train linear regression and logistic regression models and test them for possible bias without ever revealing the sensitive features in the clear. We demonstrate how it can be applied to leave-one-out regression testing, and show using the adult income data set that our method is practical to run.
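The core dilemma in the abstract is that candidate features must be checked for correlation with sensitive features the developer is not allowed to see. The sketch below is a minimal plaintext analogue of that check, not the paper's encrypted protocol: it fits a regression with each candidate feature dropped in turn (leave-one-out) and also regresses each candidate feature on the sensitive feature to flag possible proxies. The synthetic data, feature names, and threshold are illustrative assumptions; in CryptoCredit these computations would run under fully homomorphic encryption so the sensitive columns never appear in the clear.

```python
# Minimal plaintext sketch of the bias checks described in the abstract.
# Assumption: in the actual CryptoCredit protocol these computations are done
# homomorphically, so the sensitive columns are never revealed to the developer.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic stand-ins for an Adult-income-style setting (illustrative only).
sensitive = rng.integers(0, 2, size=(n, 1)).astype(float)     # protected attribute
proxy = 2.0 * sensitive[:, 0] + rng.normal(0, 0.5, n)         # feature correlated with it
clean = rng.normal(0, 1, n)                                    # independent feature
X = np.column_stack([proxy, clean])
y = 3.0 * clean + 1.5 * sensitive[:, 0] + rng.normal(0, 1, n)  # outcome

feature_names = ["proxy_feature", "clean_feature"]

# 1) Leave-one-out regression: drop each candidate feature and compare fit quality.
full_r2 = LinearRegression().fit(X, y).score(X, y)
for j, name in enumerate(feature_names):
    X_loo = np.delete(X, j, axis=1)
    r2 = LinearRegression().fit(X_loo, y).score(X_loo, y)
    print(f"drop {name}: R^2 {full_r2:.3f} -> {r2:.3f}")

# 2) Correlation test: regress each candidate feature on the sensitive feature(s).
for j, name in enumerate(feature_names):
    r2_sens = LinearRegression().fit(sensitive, X[:, j]).score(sensitive, X[:, j])
    flag = "possible proxy for sensitive feature" if r2_sens > 0.1 else "ok"
    print(f"{name}: R^2 against sensitive features = {r2_sens:.3f} ({flag})")
```

Running this flags `proxy_feature` as strongly explained by the sensitive attribute while `clean_feature` is not; the paper's contribution is performing analogous regressions and tests on encrypted sensitive data rather than in the clear.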
