Paper Title
Privacy-preserving Learning via Deep Net Pruning
Paper Authors
Paper Abstract
This paper asks whether neural network pruning can serve as a tool for achieving differential privacy without sacrificing much data utility. As a first step toward understanding the relationship between neural network pruning and differential privacy, the paper proves that pruning a given layer of a neural network is equivalent to adding a certain amount of differentially private noise to its hidden-layer activations. The paper also presents experimental results that illustrate the practical implications of this theoretical finding, and of the key parameter values, in a simple practical setting. These results suggest that neural network pruning can be a more effective alternative to directly adding differentially private noise to neural networks.
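The abstract's central claim is that pruning a layer can be viewed as an additive perturbation of that layer's activations. A minimal sketch of this viewpoint, using simple magnitude-based pruning on a random weight matrix (the layer sizes, pruning fraction, and variable names are illustrative assumptions, not the paper's construction or its privacy analysis):

```python
import numpy as np

# Illustrative sketch only: magnitude-based pruning of one layer's
# weights, viewed as an additive perturbation ("noise") on the
# hidden-layer activations. Shapes and the pruning fraction are
# hypothetical choices, not taken from the paper.

rng = np.random.default_rng(0)

W = rng.normal(size=(4, 8))   # weights of one hidden layer
x = rng.normal(size=8)        # a single input example

# Prune: zero out the fraction `p` of weights with smallest magnitude.
p = 0.5
threshold = np.quantile(np.abs(W), p)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

h = W @ x                     # original pre-activations
h_pruned = W_pruned @ x       # pre-activations after pruning

# Pruning acts on the activations as an additive perturbation:
# h_pruned = h + noise, which is the form of the equivalence the
# abstract describes (the paper characterizes this noise formally).
noise = h_pruned - h
print(noise)
```

The sketch only demonstrates the decomposition `h_pruned = h + noise`; whether that implicit noise satisfies a differential-privacy guarantee, and for which parameters, is exactly what the paper's theory addresses.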