Paper Title

On the Convergence of Heterogeneous Federated Learning with Arbitrary Adaptive Online Model Pruning

Paper Authors

Hanhan Zhou, Tian Lan, Guru Venkataramani, Wenbo Ding

Paper Abstract

One of the biggest challenges in Federated Learning (FL) is that client devices often have drastically different computation and communication resources for local updates. To this end, recent research efforts have focused on training heterogeneous local models obtained by pruning a shared global model. Despite empirical success, theoretical guarantees on convergence remain an open question. In this paper, we present a unifying framework for heterogeneous FL algorithms with arbitrary adaptive online model pruning and provide a general convergence analysis. In particular, we prove that under certain sufficient conditions, and on both IID and non-IID data, these algorithms converge to a stationary point of standard FL for general smooth cost functions, with a convergence rate of $O(\frac{1}{\sqrt{Q}})$. Moreover, we illuminate two key factors impacting convergence: pruning-induced noise and the minimum coverage index, advocating a joint design of local pruning masks for efficient training.
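
The mechanism described in the abstract (each client trains a pruned submodel defined by a binary mask, and the server aggregates each parameter over only the clients whose masks cover it) can be illustrated with a small simulation. The sketch below is a minimal interpretation, not the paper's reference implementation: the quadratic per-client losses, the mask generation, and names like `masks` and `coverage` are hypothetical choices for this demo. It also shows how the minimum coverage index mentioned in the abstract would be computed.

```python
# Minimal, illustrative sketch of heterogeneous FL with per-client binary
# pruning masks and mask-weighted server aggregation. Hypothetical setup;
# not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)
d, n_clients, Q, local_steps, lr = 8, 4, 200, 5, 0.1

# Per-client quadratic losses f_i(w) = 0.5 * ||w - t_i||^2 (non-IID targets).
targets = rng.normal(size=(n_clients, d))

# Arbitrary binary pruning masks, one per client (heterogeneous submodels).
masks = (rng.random((n_clients, d)) < 0.6).astype(float)
masks[0] = 1.0  # ensure every coordinate is covered by at least one client

# Minimum coverage index: the smallest number of clients covering any
# single parameter; the convergence analysis ties the rate to this quantity.
coverage = masks.sum(axis=0)
print("minimum coverage index:", int(coverage.min()))

w = np.zeros(d)  # shared global model
for _ in range(Q):
    updates = np.zeros((n_clients, d))
    for i in range(n_clients):
        # Client i trains only the unpruned coordinates of its submodel.
        w_i = w * masks[i]
        for _ in range(local_steps):
            grad = (w_i - targets[i]) * masks[i]  # masked gradient step
            w_i -= lr * grad
        updates[i] = w_i * masks[i]
    # Server: average each coordinate over only the clients that cover it.
    w = (masks * updates).sum(axis=0) / np.maximum(coverage, 1.0)

print("global model after", Q, "rounds:", np.round(w, 3))
```

Under this mask-weighted averaging, each coordinate of the global model is pulled toward the average target of the clients covering it; a larger minimum coverage index means every parameter receives updates from more clients per round, which matches the intuition in the abstract that coverage by the local pruning masks governs convergence.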
