Paper Title

Defense Against Gradient Leakage Attacks via Learning to Obscure Data

Paper Authors

Yuxuan Wan, Han Xu, Xiaorui Liu, Jie Ren, Wenqi Fan, Jiliang Tang

Paper Abstract

Federated learning is considered an effective privacy-preserving learning mechanism that separates the client's data from the model training process. However, federated learning is still at risk of privacy leakage because of the existence of attackers who deliberately conduct gradient leakage attacks to reconstruct the client data. Recently, popular strategies such as gradient perturbation methods and input encryption methods have been proposed to defend against gradient leakage attacks. Nevertheless, these defenses can either greatly sacrifice the model performance or be evaded by more advanced attacks. In this paper, we propose a new defense method that protects the privacy of clients' data by learning to obscure data. Our defense method can generate synthetic samples that are totally distinct from the original samples, yet maximally preserve their predictive features and guarantee the model performance. Furthermore, our defense strategy makes it extremely difficult for the gradient leakage attack and its variants to reconstruct the client data. Through extensive experiments, we show that our proposed defense method obtains better privacy protection while preserving high accuracy compared with state-of-the-art methods.
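
To make the threat model concrete, below is a minimal PyTorch sketch of a DLG-style gradient leakage attack (in the spirit of "Deep Leakage from Gradients"): the attacker observes a client's shared gradient and optimizes a dummy input and soft label until their induced gradient matches it. All names here (SimpleNet, true_x, dummy_x, the step counts and sizes) are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

# Illustrative victim model; any differentiable classifier works the same way.
class SimpleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(784, 10)

    def forward(self, x):
        return self.fc(x)

model = SimpleNet()
criterion = nn.CrossEntropyLoss()

# The private client sample and the gradient the attacker observes
# from a federated update.
true_x = torch.randn(1, 784)
true_y = torch.tensor([3])
true_grads = torch.autograd.grad(criterion(model(true_x), true_y),
                                 model.parameters())

# The attacker optimizes a dummy sample and a soft dummy label so that
# their gradient matches the observed one.
dummy_x = torch.randn(1, 784, requires_grad=True)
dummy_y = torch.randn(1, 10, requires_grad=True)
optimizer = torch.optim.LBFGS([dummy_x, dummy_y])

def closure():
    optimizer.zero_grad()
    # Cross-entropy against the learned soft label.
    dummy_loss = torch.sum(-torch.softmax(dummy_y, dim=-1)
                           * torch.log_softmax(model(dummy_x), dim=-1))
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(),
                                      create_graph=True)
    # Squared distance between dummy and observed gradients.
    grad_diff = sum(((dg - tg) ** 2).sum()
                    for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(100):
    optimizer.step(closure)
# After optimization, dummy_x approximates the private sample true_x.
```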
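
Continuing the sketch above (it reuses model and true_x), the snippet below gives a rough sense of the defense direction the abstract describes: the client synthesizes a surrogate sample that moves far from the original in input space while matching the model's predictive signal, then computes its update on the surrogate. This is a simplified objective written for illustration only, not the paper's actual algorithm or loss; the trade-off weight 0.1 is an arbitrary assumption.

```python
# Start the synthetic sample from a noisy copy of the private input.
x_syn = (true_x + 0.5 * torch.randn_like(true_x)).detach().requires_grad_(True)
opt = torch.optim.Adam([x_syn], lr=0.1)

for _ in range(200):
    opt.zero_grad()
    # Preserve predictive features: match the model's output on the true sample.
    feat_loss = nn.functional.mse_loss(model(x_syn), model(true_x).detach())
    # Obscure the input: push the synthetic sample away from the original
    # in input space (negated distance, so it is maximized).
    obscure_loss = -nn.functional.mse_loss(x_syn, true_x)
    (feat_loss + 0.1 * obscure_loss).backward()
    opt.step()

# The client would now compute and share gradients on (x_syn, true_y) instead
# of (true_x, true_y), so a gradient-matching attacker reconstructs x_syn,
# not the private sample.
```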
