Paper Title

Efficient Privacy-Preserving Machine Learning with Lightweight Trusted Hardware

Authors

Pengzhi Huang, Thang Hoang, Yueying Li, Elaine Shi, G. Edward Suh

Abstract

In this paper, we propose a new secure machine learning inference platform assisted by a small dedicated security processor, which will be easier to protect and deploy compared to today's TEEs integrated into high-performance processors. Our platform provides three main advantages over the state of the art: (i) We achieve significant performance improvements compared to state-of-the-art distributed Privacy-Preserving Machine Learning (PPML) protocols, with only a small security processor that is comparable to a discrete security chip such as the Trusted Platform Module (TPM) or on-chip security subsystems in SoCs similar to the Apple enclave processor. In the semi-honest setting with WAN/GPU, our scheme is 4X-63X faster than Falcon (PoPETs'21) and AriaNN (PoPETs'22) and 3.8X-12X more communication efficient. We achieve even higher performance improvements in the malicious setting. (ii) Our platform guarantees security with abort against malicious adversaries under the honest-majority assumption. (iii) Our technique is not limited by the size of secure memory in a TEE and can support high-capacity modern neural networks like ResNet18 and Transformer. While previous work investigated the use of high-performance TEEs in PPML, this work is the first to show that even tiny secure hardware with very limited performance can be leveraged to significantly speed up distributed PPML protocols if the protocol is carefully designed for lightweight trusted hardware.
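The abstract does not spell out the protocol itself, but a common way a lightweight trusted component accelerates secret-sharing-based PPML (the setting in which Falcon and AriaNN operate) is by acting as a generator of correlated randomness, e.g., Beaver multiplication triples, so that the untrusted high-performance servers perform all model-sized computation on secret shares. The sketch below is a minimal Python/NumPy illustration of that generic idea, not necessarily this paper's exact protocol: all function names are hypothetical, and it omits fixed-point truncation, the malicious-security checks, and actual networking.

```python
import numpy as np

# Arithmetic in the ring Z_{2^64}; np.uint64 arrays wrap modulo 2^64 naturally.

def share(x, rng):
    """Split x into two additive shares: x = s0 + s1 (mod 2^64)."""
    s0 = rng.integers(0, 2**64, size=x.shape, dtype=np.uint64)
    return s0, x - s0

def beaver_triple(shape, rng):
    """Role of the tiny trusted chip (hypothetical): sample (a, b, c = a*b)
    and secret-share it. This is cheap, model-independent randomness
    generation -- no inference-sized computation inside the secure hardware."""
    a = rng.integers(0, 2**64, size=shape, dtype=np.uint64)
    b = rng.integers(0, 2**64, size=shape, dtype=np.uint64)
    c = a * b  # elementwise product, mod 2^64
    return share(a, rng), share(b, rng), share(c, rng)

def mul_shares(x_sh, y_sh, triple):
    """Two untrusted servers multiply secret-shared x and y with one triple.
    Only the masked values e = x - a and f = y - b are opened; since a and b
    are uniform one-time pads, e and f reveal nothing about x and y."""
    (a0, a1), (b0, b1), (c0, c1) = triple
    x0, x1 = x_sh
    y0, y1 = y_sh
    e = (x0 - a0) + (x1 - a1)  # opened by exchanging the two local differences
    f = (y0 - b0) + (y1 - b1)
    z0 = f * a0 + e * b0 + c0          # server 0's share of x*y
    z1 = e * f + f * a1 + e * b1 + c1  # server 1 also adds the public e*f term
    return z0, z1

rng = np.random.default_rng(0)
x = np.array([3, 7], dtype=np.uint64)
y = np.array([5, 9], dtype=np.uint64)
triple = beaver_triple(x.shape, rng)
z0, z1 = mul_shares(share(x, rng), share(y, rng), triple)
assert np.array_equal(z0 + z1, x * y)  # reconstruct: [15, 63]
```

The appeal of this division of labor is that the trusted hardware's work is independent of the model and can be streamed triple by triple, which is consistent with the abstract's claims that a TPM-class processor suffices and that the technique is not bounded by the TEE's secure memory.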
