Paper Title
FedFMC: Sequential Efficient Federated Learning on Non-iid Data
Paper Authors
Paper Abstract
As a mechanism for devices to update a global model without sharing data, federated learning bridges the tension between the need for data and respect for privacy. However, classic FL methods like Federated Averaging struggle with non-iid data, a prevalent situation in the real world. Previous solutions are sub-optimal because they either employ a small shared global subset of data or a greater number of models, with increased communication costs. We propose FedFMC (Fork-Merge-Consolidate), a method that dynamically forks devices into updating different global models, then merges and consolidates the separate models into one. We first show the soundness of FedFMC on simple datasets, then run several experiments comparing it against baseline approaches. These experiments show that FedFMC substantially improves upon earlier approaches to non-iid data in the federated learning context without using a globally shared subset of data or increasing communication costs.
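The abstract only fixes the high-level round structure of fork-merge-consolidate; the actual forking criterion and merge schedule are the paper's contributions and are not described here. A minimal sketch of that control flow, in which the random group assignment, the stubbed local update, and the plain parameter-averaging merge/consolidate rules are all illustrative assumptions rather than the paper's method, might look like:

```python
# Minimal sketch of a fork-merge-consolidate round structure.
# Assumptions (not from the paper): models are flat parameter
# vectors, local "training" is a stubbed gradient step, forking
# assigns clients to model groups at random, and merging /
# consolidation are plain parameter averages.
import numpy as np

rng = np.random.default_rng(0)

def local_update(params, client_id):
    """Stand-in for one client's local training pass."""
    return params - 0.1 * rng.normal(size=params.shape)  # fake gradient step

def fork(clients, num_groups):
    """Assign each client to one of several global models (random here)."""
    groups = {g: [] for g in range(num_groups)}
    for c in clients:
        groups[int(rng.integers(num_groups))].append(c)
    return groups

def merge(updates):
    """Combine one group's client updates into that group's model."""
    return np.mean(updates, axis=0)

def consolidate(models):
    """Collapse the separate forked models into a single global model."""
    return np.mean(models, axis=0)

dim, clients, num_groups = 10, list(range(8)), 2
models = [rng.normal(size=dim) for _ in range(num_groups)]

for _ in range(3):  # a few federated rounds
    groups = fork(clients, num_groups)
    for g, members in groups.items():
        updates = [local_update(models[g], c) for c in members]
        if updates:
            models[g] = merge(updates)

global_model = consolidate(models)  # final single model
print(global_model.shape)
```

This sketch only pins down the loop shape (fork clients to models, update per group, merge within groups, consolidate at the end); everything that makes FedFMC effective on non-iid data lives in how those three steps are actually defined in the paper.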