Paper Title
Enhancing Heterogeneous Federated Learning with Knowledge Extraction and Multi-Model Fusion
Paper Authors
Paper Abstract
Motivated by concerns over user data privacy, this paper presents a new federated learning (FL) method that trains machine learning models on edge devices without accessing sensitive data. Traditional FL methods, although privacy-preserving, cannot handle model heterogeneity and incur high communication costs because they rely on aggregating model parameters. To address these limitations, we propose a resource-aware FL method that aggregates local knowledge from edge models and distills it into robust global knowledge through knowledge distillation. This approach enables efficient multi-model knowledge fusion and the deployment of resource-aware models while preserving model heterogeneity. Compared with existing FL algorithms, our method reduces communication cost and improves performance on heterogeneous data and models. Notably, it reduces the communication cost of ResNet-32 by up to 50\% and of VGG-11 by up to 10$\times$ while delivering superior performance.
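To make the core idea concrete, below is a minimal, hypothetical sketch of how local knowledge from heterogeneous edge models could be fused and distilled into a global model on a shared unlabeled transfer set. It is not the paper's actual implementation; the names (`edge_models`, `global_model`, `transfer_loader`) and hyperparameters (temperature `T`, learning rate) are illustrative assumptions, and the fusion rule shown (averaging temperature-scaled soft predictions, then minimizing a KL distillation loss) is the standard knowledge-distillation recipe rather than the authors' exact algorithm.

```python
import torch
import torch.nn.functional as F

def distill_global_model(edge_models, global_model, transfer_loader,
                         epochs=1, T=3.0, lr=1e-3, device="cpu"):
    """Hypothetical sketch: fuse knowledge from heterogeneous edge models
    into one global model via soft-label averaging and KL distillation."""
    for m in edge_models:
        m.eval().to(device)          # teachers: frozen edge models
    global_model.train().to(device)  # student: the global model
    opt = torch.optim.Adam(global_model.parameters(), lr=lr)

    for _ in range(epochs):
        for x in transfer_loader:    # unlabeled inputs from a shared set
            x = x.to(device)
            with torch.no_grad():
                # Aggregate local knowledge: average temperature-scaled
                # soft predictions across edge architectures. Only logits
                # cross the network, so architectures may differ freely.
                teacher_probs = torch.stack(
                    [F.softmax(m(x) / T, dim=1) for m in edge_models]
                ).mean(dim=0)
            student_log_probs = F.log_softmax(global_model(x) / T, dim=1)
            # Standard KD objective: KL(teacher || student), scaled by T^2.
            loss = F.kl_div(student_log_probs, teacher_probs,
                            reduction="batchmean") * (T * T)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return global_model
```

Because only predictions on the transfer set are exchanged (rather than full parameter tensors, as in FedAvg-style aggregation), a scheme of this shape also illustrates where the abstract's communication savings can come from.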