Paper Title

DPAUC: Differentially Private AUC Computation in Federated Learning

Paper Authors

Jiankai Sun, Xin Yang, Yuanshun Yao, Junyuan Xie, Di Wu, Chong Wang

Paper Abstract

Federated learning (FL) has gained significant attention recently as a privacy-enhancing tool to jointly train a machine learning model by multiple participants. The prior work on FL has mostly studied how to protect label privacy during model training. However, model evaluation in FL might also lead to potential leakage of private label information. In this work, we propose an evaluation algorithm that can accurately compute the widely used AUC (area under the curve) metric when using the label differential privacy (DP) in FL. Through extensive experiments, we show our algorithms can compute accurate AUCs compared to the ground truth. The code is available at https://github.com/bytedance/fedlearner/tree/master/example/privacy/DPAUC.
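
The abstract only states the goal; the concrete mechanism is given in the paper and the linked repository. As a rough, hypothetical illustration of one generic way to estimate AUC while releasing only noise-perturbed label aggregates (a minimal sketch under stated assumptions, not the authors' DPAUC algorithm; the function name, score-binning scheme, and Laplace noise calibration below are all illustrative), consider:

```python
import numpy as np

def auc_from_noisy_histogram(scores, labels, n_bins=100, epsilon=1.0, rng=None):
    """Illustrative sketch only (not the DPAUC algorithm): estimate AUC from
    per-bin positive/negative counts perturbed with Laplace noise, so the
    label holder releases noisy aggregates instead of raw labels."""
    rng = np.random.default_rng() if rng is None else rng
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)

    # Bucket prediction scores (assumed to lie in [0, 1]) into n_bins bins.
    bins = np.clip((scores * n_bins).astype(int), 0, n_bins - 1)
    pos = np.bincount(bins[labels == 1], minlength=n_bins).astype(float)
    neg = np.bincount(bins[labels == 0], minlength=n_bins).astype(float)

    # Perturb each histogram with Laplace noise; each example contributes to
    # exactly one bin of one histogram, so the per-histogram sensitivity is 1.
    pos += rng.laplace(scale=1.0 / epsilon, size=n_bins)
    neg += rng.laplace(scale=1.0 / epsilon, size=n_bins)
    pos, neg = np.maximum(pos, 0.0), np.maximum(neg, 0.0)

    # AUC = P(score_pos > score_neg) + 0.5 * P(tie), estimated bin-wise.
    neg_below = np.cumsum(neg) - neg  # negatives in strictly lower bins
    return float(np.sum(pos * (neg_below + 0.5 * neg)) / (pos.sum() * neg.sum()))
```

With noise-free counts this reduces to the usual rank-based AUC estimate over binned scores; the added noise is what trades accuracy for label privacy, which is exactly the tension the paper's evaluation algorithm addresses.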
