Paper Title

Maximizing Global Model Appeal in Federated Learning

Paper Authors

Cho, Yae Jee, Jhunjhunwala, Divyansh, Li, Tian, Smith, Virginia, Joshi, Gauri

Paper Abstract

Federated learning typically considers collaboratively training a global model using local data at edge clients. Clients may have their own individual requirements, such as having a minimal training loss threshold, which they expect to be met by the global model. However, due to client heterogeneity, the global model may not meet each client's requirements, and only a small subset may find the global model appealing. In this work, we explore the problem of the global model lacking appeal to the clients due to not being able to satisfy local requirements. We propose MaxFL, which aims to maximize the number of clients that find the global model appealing. We show that having a high global model appeal is important to maintain an adequate pool of clients for training, and can directly improve the test accuracy on both seen and unseen clients. We provide convergence guarantees for MaxFL and show that MaxFL achieves a $22$-$40\%$ and $18$-$50\%$ test accuracy improvement for the training clients and unseen clients respectively, compared to a wide range of FL modeling approaches, including those that tackle data heterogeneity, aim to incentivize clients, and learn personalized or fair models.
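
To make the abstract's notion of "global model appeal" concrete, here is a minimal sketch: a client finds the global model appealing when its local training loss falls below that client's own threshold, and MaxFL aims to maximize the number of clients for which this holds. The hard indicator count is not differentiable, so the sketch also shows a sigmoid relaxation of it; the sigmoid smoothing, the temperature value, and all function names below are illustrative assumptions, not the paper's exact objective.

```python
import numpy as np

def model_appeal(local_losses, thresholds):
    """Fraction of clients whose local training loss meets their own
    requirement, i.e. loss_k < threshold_k -- the abstract's notion of
    the global model being 'appealing' to client k."""
    losses = np.asarray(local_losses, dtype=float)
    thr = np.asarray(thresholds, dtype=float)
    return float(np.mean(losses < thr))

def smoothed_appeal(local_losses, thresholds, temperature=0.1):
    """Differentiable surrogate for the appeal count: the hard indicator
    1[loss_k < threshold_k] is replaced by a sigmoid of the margin
    (threshold_k - loss_k). The sigmoid relaxation and the temperature
    are illustrative assumptions, not the paper's stated formulation."""
    losses = np.asarray(local_losses, dtype=float)
    thr = np.asarray(thresholds, dtype=float)
    margins = (thr - losses) / temperature
    return float(np.mean(1.0 / (1.0 + np.exp(-margins))))

# Example: 5 clients with heterogeneous losses under one global model.
losses = [0.30, 0.55, 0.20, 0.90, 0.42]
thresholds = [0.50, 0.60, 0.25, 0.50, 0.40]
print(model_appeal(losses, thresholds))     # 0.6 -> 3 of 5 clients satisfied
print(smoothed_appeal(losses, thresholds))  # smooth proxy for the hard count
```

Optimizing the smoothed surrogate with respect to the global model's parameters (through the per-client losses) would then push the model toward satisfying as many client thresholds as possible, which is the high-level goal the abstract attributes to MaxFL.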
