Paper Title
FedCAT: Towards Accurate Federated Learning via Device Concatenation
Paper Authors
Paper Abstract
As a promising distributed machine learning paradigm, Federated Learning (FL) enables the involved devices to collaboratively train a global model without exposing their private local data. However, in non-IID scenarios, the classification accuracy of FL models decreases drastically due to the weight divergence caused by data heterogeneity. Although various FL variants have been studied to improve model accuracy, most of them still suffer from non-negligible communication and computation overhead. In this paper, we introduce a novel FL approach named FedCat that achieves high model accuracy through our proposed device selection strategy and device concatenation-based local training method. Unlike conventional FL methods that aggregate local models trained on individual devices, FedCat periodically aggregates local models after they traverse a series of logically concatenated devices, which effectively mitigates the model weight divergence problem. Comprehensive experimental results on four well-known benchmarks show that our approach significantly improves the model accuracy of state-of-the-art FL methods without incurring extra communication overhead.
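To make the contrast with conventional aggregation concrete, the sketch below illustrates the concatenation idea in the abstract: within each group, a model visits the logically concatenated devices one after another, so it trains on the union of their (possibly non-IID) local data before a FedAvg-style aggregation. This is a minimal illustrative sketch, not the paper's actual algorithm; the function names (`local_step`, `fedcat_round`), the toy least-squares model, and the grouping of devices are assumptions for illustration, and the paper's device selection strategy is not modeled.

```python
def local_step(weights, data, lr=0.1):
    # One gradient step of least-squares regression on a device's
    # local (x, y) pairs; stands in for one epoch of local training.
    grad = [0.0] * len(weights)
    for x, y in data:
        err = sum(w * xi for w, xi in zip(weights, x)) - y
        for i, xi in enumerate(x):
            grad[i] += 2 * err * xi / len(data)
    return [w - lr * g for w, g in zip(weights, grad)]

def fedcat_round(global_weights, device_groups):
    """One hypothetical concatenation-style round: in each group, the
    model traverses the concatenated devices sequentially, then the
    resulting group-level models are averaged (FedAvg-style)."""
    group_models = []
    for group in device_groups:
        w = list(global_weights)
        for device_data in group:  # sequential traversal of the chain
            w = local_step(w, device_data)
        group_models.append(w)
    # Aggregate the group-level models by coordinate-wise averaging.
    return [sum(ws) / len(group_models) for ws in zip(*group_models)]
```

Because each group-level model sees data from every device in its chain before aggregation, the per-round updates are computed on a less skewed data distribution than in plain per-device training, which is the intuition behind the weight-divergence mitigation described above.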