Paper Title

Federated Learning of Neural ODE Models with Different Iteration Counts

Paper Authors

Yuto Hoshino, Hiroki Kawakami, Hiroki Matsutani

Paper Abstract

Federated learning is a distributed machine learning approach in which clients train models locally with their own data and upload them to a server, so that their trained results are shared without uploading the raw data itself. Federated learning faces challenges such as communication size reduction and client heterogeneity: addressing the former mitigates communication overheads, while addressing the latter allows clients to choose models that match their available compute resources. To address these challenges, in this paper we utilize Neural ODE-based models for federated learning. The proposed flexible federated learning approach can reduce the communication size while aggregating models with different iteration counts or depths. Our contribution is that we experimentally demonstrate that the proposed federated learning can aggregate models with different iteration counts or depths, and we compare it with a different federated learning approach in terms of accuracy. Furthermore, we show that our approach can reduce the communication size by up to 92.4% compared with a baseline ResNet model on the CIFAR-10 dataset.
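The idea that makes aggregation across different depths possible can be illustrated briefly. A Neural ODE block approximates dz/dt = f(z, θ) with C Euler steps, z_{k+1} = z_k + (1/C) · f(z_k, θ), reusing the same weights θ at every step; the iteration count C therefore changes the effective depth but not the parameter shape, so standard federated averaging still applies across clients with different C. Below is a minimal, hypothetical PyTorch sketch of this mechanism; it is not the authors' implementation, and the class and function names (ODEBlock, fed_avg) are assumptions for illustration only.

```python
# Hypothetical sketch (not the paper's code): an Euler-discretized Neural ODE
# block whose single set of weights is reused for `iters` steps, so clients
# with different iteration counts still share one parameter shape.
import copy
import torch
import torch.nn as nn

class ODEBlock(nn.Module):
    def __init__(self, channels: int, iters: int):
        super().__init__()
        self.iters = iters          # per-client iteration count (depth); not a parameter
        self.f = nn.Sequential(     # f(z, θ): one weight-shared residual function
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        h = 1.0 / self.iters        # Euler step size over t in [0, 1]
        for _ in range(self.iters): # reuse the same weights at every step
            z = z + h * self.f(z)
        return z

def fed_avg(client_models):
    """Average parameters across clients. This works even when their
    iteration counts differ, because every client has the same state_dict."""
    global_state = copy.deepcopy(client_models[0].state_dict())
    for key in global_state:
        global_state[key] = torch.stack(
            [m.state_dict()[key].float() for m in client_models]
        ).mean(dim=0)
    return global_state

# Clients pick iteration counts to match their compute budgets.
clients = [ODEBlock(channels=8, iters=c) for c in (2, 4, 8)]
new_state = fed_avg(clients)
for m in clients:
    m.load_state_dict(new_state)
```

In this sketch only θ (the weights of self.f) is ever uploaded, regardless of the iteration count, which is the intuition behind the reported communication-size reduction relative to a ResNet whose blocks each carry distinct weights.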
