Paper Title

Robustness of Locally Differentially Private Graph Analysis Against Poisoning

Authors

Jacob Imola, Amrita Roy Chowdhury, Kamalika Chaudhuri

Abstract

Locally differentially private (LDP) graph analysis allows private analysis on a graph that is distributed across multiple users. However, such computations are vulnerable to data poisoning attacks where an adversary can skew the results by submitting malformed data. In this paper, we formally study the impact of poisoning attacks for graph degree estimation protocols under LDP. We make two key technical contributions. First, we observe LDP makes a protocol more vulnerable to poisoning -- the impact of poisoning is worse when the adversary can directly poison their (noisy) responses, rather than their input data. Second, we observe that graph data is naturally redundant -- every edge is shared between two users. Leveraging this data redundancy, we design robust degree estimation protocols under LDP that can significantly reduce the impact of data poisoning and compute degree estimates with high accuracy. We evaluate our proposed robust degree estimation protocols under poisoning attacks on real-world datasets to demonstrate their efficacy in practice.
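To make the two observations in the abstract concrete, the sketch below simulates LDP degree estimation with the Laplace mechanism and a crude redundancy check based on the fact that every edge is counted in both endpoints' degrees (so the true degree sum equals 2·|E|). This is a hypothetical illustration, not the paper's actual protocol; all function names and the `slack` tolerance are assumptions made for this example.

```python
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sampling from Laplace(0, scale).
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def ldp_degree(true_degree, epsilon, rng=None):
    # Laplace mechanism: adding or removing one edge changes a user's
    # degree by 1, so the sensitivity is 1 and the noise scale is 1/epsilon.
    rng = rng or random.Random()
    return true_degree + laplace_noise(1.0 / epsilon, rng)

def degree_sum_consistent(noisy_degrees, num_edges, epsilon, slack=5.0):
    # Redundancy check (illustrative only): each edge (u, v) is shared by
    # both endpoints, so the true degree sum is exactly 2 * |E|. A report
    # poisoned far beyond the expected noise shows up as an inconsistency.
    expected = 2 * num_edges
    tolerance = slack * len(noisy_degrees) / epsilon
    return abs(sum(noisy_degrees) - expected) <= tolerance
```

The check also hints at why poisoning a noisy *response* is worse than poisoning the *input graph*: an adversary editing its input can shift its own degree by at most n − 1, whereas an adversary that controls the noisy response can report an arbitrary value, which is exactly the gap a redundancy-based robust protocol tries to close.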
