Paper Title

Self-Supervised Pretraining for Differentially Private Learning

Paper Authors

Arash Asadian, Evan Weidner, Lei Jiang

Paper Abstract

We demonstrate that self-supervised pretraining (SSP) is a scalable solution to deep learning with differential privacy (DP) regardless of the size of the public datasets available for image classification. When no public dataset is available, we show that the features generated by SSP on only a single image enable a private classifier to obtain much better utility than non-learned handcrafted features under the same privacy budget. When a moderate- or large-sized public dataset is available, the features produced by SSP greatly outperform features trained with labels on various complex private datasets under the same privacy budget. We also compare multiple DP-enabled training frameworks for training a private classifier on the features generated by SSP. Finally, we report a non-trivial utility of 25.3\% on a private ImageNet-1K dataset when $\epsilon = 3$. Our source code can be found at \url{https://github.com/UnchartedRLab/SSP}.
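
To make the pipeline described in the abstract concrete, below is a minimal sketch (not the authors' released code) of training a private linear classifier on frozen SSP features with Opacus, one example of the DP-enabled training frameworks such a comparison would include. The ResNet-18 stand-in encoder, the synthetic data, and the delta, epoch count, learning rate, and clipping norm are all assumptions; see the repository above for the actual implementation.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet18
from opacus import PrivacyEngine

# Stand-in for an SSP-pretrained backbone: a ResNet-18 trunk with its
# classification head removed (the real SSP encoder and weights are assumed).
encoder = resnet18(weights=None)
encoder.fc = nn.Identity()
encoder.eval()

# Synthetic stand-in for the private dataset; replace with real data.
images = torch.randn(256, 3, 224, 224)
targets = torch.randint(0, 10, (256,))

# Step 1: extract features once with the frozen encoder. This is a
# deterministic per-example transform, so DP-SGD's per-sample accounting
# on the classifier below still yields the end-to-end guarantee.
with torch.no_grad():
    features = encoder(images)

train_loader = DataLoader(TensorDataset(features, targets),
                          batch_size=64, shuffle=True)

# Step 2: train only a linear classifier on the features with DP-SGD.
classifier = nn.Linear(features.shape[1], 10)
optimizer = torch.optim.SGD(classifier.parameters(), lr=0.1)

privacy_engine = PrivacyEngine()
classifier, optimizer, train_loader = privacy_engine.make_private_with_epsilon(
    module=classifier,
    optimizer=optimizer,
    data_loader=train_loader,
    target_epsilon=3.0,    # the privacy budget quoted in the abstract
    target_delta=1e-5,     # assumed; commonly chosen ~ 1/|dataset|
    epochs=20,             # assumed training length
    max_grad_norm=1.0,     # per-sample gradient clipping bound (assumed)
)

criterion = nn.CrossEntropyLoss()
for _ in range(20):
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(classifier(x), y)
        loss.backward()
        optimizer.step()

Training only the linear head keeps the number of DP-noised parameters small, which is one plausible reason frozen SSP features can retain utility at tight budgets such as $\epsilon = 3$.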
