Paper Title
Differentially Private Stochastic Gradient Descent with Low-Noise
Paper Authors
Paper Abstract
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection. It is therefore of practical and theoretical importance to develop privacy-preserving machine learning algorithms that achieve good performance while protecting privacy. In this paper, we focus on the privacy and utility (measured by excess risk bounds) of differentially private stochastic gradient descent (SGD) algorithms in the setting of stochastic convex optimization. Specifically, we examine the pointwise learning problem in the low-noise setting, for which we derive sharper excess risk bounds for the differentially private SGD algorithm. For the pairwise learning setting, we propose a simple differentially private SGD algorithm based on gradient perturbation. Furthermore, we develop novel utility bounds for the proposed algorithm, proving that it achieves optimal excess risk rates even for non-smooth losses. Notably, we establish fast learning rates for privacy-preserving pairwise learning under the low-noise condition, which are the first of their kind.
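To make the gradient-perturbation idea concrete, below is a minimal sketch of differentially private SGD that adds Gaussian noise to a clipped per-example gradient. It is not the exact algorithm analyzed in the paper: the names (dp_sgd_gradient_perturbation, grad_fn, clip_norm, sigma, eta, T) are illustrative assumptions, the sketch covers only the pointwise case (a pairwise variant would evaluate gradients of losses over pairs of examples), and calibrating the noise multiplier to a target (epsilon, delta) privacy budget is omitted.

```python
import numpy as np

def dp_sgd_gradient_perturbation(grad_fn, w0, data, T, eta, clip_norm, sigma, rng=None):
    """Sketch of gradient-perturbed DP-SGD (Gaussian mechanism).

    grad_fn(w, z) returns the loss gradient at parameters w on example z.
    clip_norm and sigma are hypothetical names for the clipping threshold
    and noise multiplier; they must be calibrated to the desired
    (epsilon, delta) guarantee, which this sketch does not do.
    """
    rng = np.random.default_rng() if rng is None else rng
    w = np.array(w0, dtype=float)
    n = len(data)
    for _ in range(T):
        z = data[rng.integers(n)]                                   # sample one example uniformly
        g = grad_fn(w, z)
        g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))   # clip to bound sensitivity
        noise = rng.normal(0.0, sigma * clip_norm, size=g.shape)    # Gaussian perturbation
        w = w - eta * (g + noise)                                   # noisy gradient step
    return w
```

A typical call might look like dp_sgd_gradient_perturbation(grad_fn, np.zeros(d), train_data, T=1000, eta=0.01, clip_norm=1.0, sigma=2.0); the excess risk of the returned iterate is the quantity the utility bounds in the paper control.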