Paper Title

Reducing Impacts of System Heterogeneity in Federated Learning using Weight Update Magnitudes

Authors

Wang, Irene

Abstract

The widespread adoption of handheld devices has fueled rapid growth in new applications. Several of these applications employ machine learning models to train on user data that is typically private and sensitive. Federated Learning enables machine learning models to train locally on each handheld device while synchronizing only their neuron updates with a server. While this preserves user privacy, technology scaling and software advancements have resulted in handheld devices with varying performance capabilities. As a result, the training time of a federated learning task is dictated by a few low-performance straggler devices, which essentially become a bottleneck for the entire training process. In this work, we aim to mitigate this performance bottleneck by dynamically forming sub-models for stragglers based on their performance and accuracy feedback. To this end, we propose Invariant Dropout, a dynamic technique that forms sub-models based on a neuron update threshold. Invariant Dropout uses neuron updates from the non-straggler clients to develop a tailored sub-model for each straggler during each training iteration. All weights whose update magnitudes fall below the threshold are dropped for that iteration. We evaluate Invariant Dropout using five real-world mobile clients. Our evaluations show that Invariant Dropout achieves a maximum accuracy gain of 1.4 percentage points over state-of-the-art Ordered Dropout while mitigating the performance bottleneck caused by stragglers.
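
To make the thresholding idea in the abstract concrete, here is a minimal sketch, not the paper's implementation: the function name invariant_dropout_mask, the per-neuron magnitude aggregation, and the quantile-based choice of threshold (sized by a hypothetical keep_fraction derived from a straggler's performance feedback) are all illustrative assumptions.

```python
import numpy as np

def invariant_dropout_mask(weight_updates: np.ndarray,
                           keep_fraction: float) -> np.ndarray:
    """Boolean mask over neurons: True = keep in the straggler's sub-model.

    weight_updates: (num_neurons, num_inputs) updates observed from
    non-straggler clients in the current round (illustrative stand-in).
    keep_fraction: fraction of neurons the straggler can afford to train,
    assumed to come from its performance feedback.
    """
    # Aggregate each neuron's update magnitude over its incoming weights.
    magnitudes = np.abs(weight_updates).sum(axis=1)
    # Choose the threshold so roughly `keep_fraction` of neurons survive;
    # neurons whose updates fall below it are dropped for this iteration.
    threshold = np.quantile(magnitudes, 1.0 - keep_fraction)
    return magnitudes >= threshold

# Toy example: an 8-neuron layer with 4 inputs; keep half the neurons.
rng = np.random.default_rng(0)
updates = rng.normal(size=(8, 4))
mask = invariant_dropout_mask(updates, keep_fraction=0.5)
print(mask)  # False entries are omitted from the straggler's sub-model
```

In the paper the threshold is tailored per straggler from its performance and accuracy feedback; the quantile here is only a stand-in for that calibration step.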
