Paper Title
SplitEasy: A Practical Approach for Training ML models on Mobile Devices
Paper Authors
Paper Abstract
Modern mobile devices, although resourceful, cannot train state-of-the-art machine learning models without the assistance of servers, which would require access to potentially privacy-sensitive user data. Split learning has recently emerged as a promising technique for training complex deep learning (DL) models on low-powered mobile devices. The core idea behind this technique is to train the sensitive layers of a DL model on mobile devices while offloading the computationally intensive layers to a server. Although many works have already explored the effectiveness of split learning in simulated settings, a usable toolkit for this purpose does not exist. In this work, we highlight the theoretical and technical challenges that need to be resolved to develop a functional framework that trains ML models on mobile devices without transferring raw data to a server. Focusing on these challenges, we propose SplitEasy, a framework for training ML models on mobile devices using split learning. Using the abstraction provided by SplitEasy, developers can run various DL models under a split learning setting with minimal modifications. We provide a detailed explanation of SplitEasy and perform experiments with six state-of-the-art neural networks. We demonstrate how SplitEasy can train models that cannot be trained solely by a mobile device, while incurring nearly constant time per data sample.
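The core idea the abstract describes, training the first layers on the device and offloading the rest to a server, can be sketched with a toy two-layer model. This is a minimal illustration of the split learning handoff, not SplitEasy's actual API; all names are hypothetical, and for simplicity it assumes the label-sharing variant in which labels are sent to the server along with the cut-layer activations.

```python
import numpy as np

# Toy sketch of split learning: the client (mobile device) keeps the
# raw data and the first layer; the server runs the remaining layer.
# Only the cut-layer activations and their gradients cross the boundary.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))  # raw inputs, never leave the client
t = rng.normal(size=(4, 2))  # labels (label-sharing variant: sent to server)

W1 = rng.normal(size=(3, 5)) * 0.1  # client-side (sensitive) layer
W2 = rng.normal(size=(5, 2)) * 0.1  # server-side layer
lr = 0.1

def train_step():
    """One split-learning step: forward to the cut, hand off, backprop back."""
    global W1, W2
    a = x @ W1                      # client forward; 'a' is sent to the server
    y = a @ W2                      # server forward on the received activations
    loss = np.mean((y - t) ** 2)    # server computes the loss
    grad_y = 2 * (y - t) / y.size
    grad_W2 = a.T @ grad_y          # server's gradient for its own layer
    grad_a = grad_y @ W2.T          # gradient at the cut layer, sent back
    grad_W1 = x.T @ grad_a          # client completes backprop locally
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
    return loss

losses = [train_step() for _ in range(50)]
print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Note that the server never sees `x`: it receives only `a` and returns `grad_a`, which is what lets the sensitive layers stay on the device while the heavier computation runs remotely.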