Paper Title


Trusted AI in Multi-agent Systems: An Overview of Privacy and Security for Distributed Learning

Paper Authors

Chuan Ma, Jun Li, Kang Wei, Bo Liu, Ming Ding, Long Yuan, Zhu Han, H. Vincent Poor

Abstract


Motivated by the advancing computational capacity of distributed end-user equipment (UEs), as well as increasing concerns about sharing private data, there has been considerable recent interest in machine learning (ML) and artificial intelligence (AI) that can be processed on distributed UEs. Specifically, in this paradigm, parts of an ML process are outsourced to multiple distributed UEs, and then the processed ML information is aggregated at a certain level at a central server, which turns a centralized ML process into a distributed one and brings about significant benefits. However, this new distributed ML paradigm raises new risks of privacy and security issues. In this paper, we provide a survey of the emerging security and privacy risks of distributed ML from a unique perspective of information exchange levels, which are defined according to the key steps of an ML process, i.e.: i) the level of preprocessed data, ii) the level of learning models, iii) the level of extracted knowledge, and iv) the level of intermediate results. We explore and analyze the potential threats at each information exchange level based on an overview of the current state-of-the-art attack mechanisms, and then discuss possible defense methods against such threats. Finally, we complete the survey by providing an outlook on the challenges and possible directions for future research in this critical area.
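The paradigm the abstract describes, where each UE processes part of the ML task on its private data and a central server only aggregates the resulting model parameters, can be illustrated with a minimal federated-averaging-style sketch. This is not the paper's own algorithm; all function names and the toy least-squares task are illustrative assumptions.

```python
# Illustrative sketch (not from the paper): UEs fit y = w*x locally on
# private (x, y) pairs; the server averages their parameters (FedAvg-style),
# so raw data never leaves the UEs -- only model-level information is exchanged.
from typing import List


def local_update(model: List[float], data: List[float], lr: float = 0.1) -> List[float]:
    """One local gradient step on a UE's private data.

    `model` holds a single weight w; `data` is a flat list [x1, y1, x2, y2, ...].
    """
    w = model[0]
    pairs = list(zip(data[0::2], data[1::2]))
    grad = sum(2 * (w * x - y) * x for x, y in pairs) / max(len(pairs), 1)
    return [w - lr * grad]


def server_aggregate(updates: List[List[float]]) -> List[float]:
    """Central server: average the UEs' locally updated parameters."""
    n = len(updates)
    return [sum(u[i] for u in updates) / n for i in range(len(updates[0]))]


# Two UEs whose private data follow y = 2x; the server sees only model updates.
global_model = [0.0]
ue_data = [[1.0, 2.0, 2.0, 4.0], [3.0, 6.0]]
for _ in range(50):
    updates = [local_update(global_model, d) for d in ue_data]
    global_model = server_aggregate(updates)

print(round(global_model[0], 2))  # converges toward the true weight w = 2.0
```

Note that even though raw data stays local, the exchanged parameters themselves leak information, which is exactly the attack surface (e.g., at the level of learning models and intermediate results) that the survey categorizes.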
