Paper title
EFFGAN: Ensembles of fine-tuned federated GANs
Paper authors
Paper abstract
Generative adversarial networks have proven to be a powerful tool for learning complex and high-dimensional data distributions, but issues such as mode collapse have been shown to make it difficult to train them. This is an even harder problem when the data is decentralized over several clients in a federated learning setup, as problems such as client drift and non-iid data make it hard for federated averaging to converge. In this work, we study the task of how to learn a data distribution when training data is heterogeneously decentralized over clients and cannot be shared. Our goal is to sample from this distribution centrally, while the data never leaves the clients. We show using standard benchmark image datasets that existing approaches fail in this setting, experiencing so-called client drift when the local number of epochs becomes too large. We thus propose a novel approach we call EFFGAN: Ensembles of fine-tuned federated GANs. Being an ensemble of local expert generators, EFFGAN is able to learn the data distribution over all clients and mitigate client drift. It is able to train with a large number of local epochs, making it more communication-efficient than previous works.
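The central-sampling idea in the abstract, an ensemble of fine-tuned local expert generators, can be sketched as a uniform mixture over per-client generators: pick one expert at random, draw a latent code, and generate. The sketch below is a minimal illustration under assumed names (`make_expert`, `sample_from_ensemble` are hypothetical stand-ins, not the paper's code); real experts would be fine-tuned GAN generators returning images.

```python
import random

def make_expert(client_id):
    """Stand-in for one client's fine-tuned generator: maps a latent z
    to a tagged sample (a real generator would return an image tensor)."""
    def generator(z):
        return {"client": client_id, "value": z * (client_id + 1)}
    return generator

def sample_from_ensemble(experts, latent_fn, rng):
    """Central sampling from the ensemble: choose one expert generator
    uniformly at random, draw a latent code, and generate a sample."""
    g = rng.choice(experts)      # uniform mixture over local experts
    z = latent_fn(rng)           # latent code, e.g. standard Gaussian
    return g(z)

if __name__ == "__main__":
    experts = [make_expert(i) for i in range(5)]
    rng = random.Random(0)
    samples = [sample_from_ensemble(experts, lambda r: r.gauss(0.0, 1.0), rng)
               for _ in range(8)]
    print(len(samples))  # 8 samples drawn centrally; data never left the clients
```

Because sampling only touches the (already communicated) generator weights, no client data is needed at generation time, which is what allows the central server to sample from the joint distribution.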