Paper Title

Hierarchically Fair Federated Learning

Paper Authors

Jingfeng Zhang, Cheng Li, Antonio Robles-Kelly, Mohan Kankanhalli

Paper Abstract

When federated learning is adopted among competing agents with siloed datasets, agents are self-interested and participate only if they are fairly rewarded. To encourage the adoption of federated learning, this paper employs a management strategy: more contributions should lead to more rewards. We propose a novel hierarchically fair federated learning (HFFL) framework, under which agents are rewarded in proportion to their pre-negotiated contribution levels. HFFL+ extends this framework to incorporate heterogeneous models. Theoretical analysis and empirical evaluation on several datasets confirm the efficacy of our frameworks in upholding fairness, thereby facilitating federated learning in competitive settings.
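The fairness notion stated in the abstract, that rewards scale with pre-negotiated contribution levels, can be illustrated with a minimal sketch. The function below is hypothetical and not taken from the paper; it simply shows proportional allocation of a reward budget across agents.

```python
# Minimal sketch (hypothetical, not from the paper): split a total reward
# budget among agents in proportion to their pre-negotiated contribution
# levels, reflecting "more contributions should lead to more rewards".

def proportional_rewards(contribution_levels, total_reward):
    """Return per-agent rewards proportional to the given contribution levels."""
    total = sum(contribution_levels)
    return [total_reward * level / total for level in contribution_levels]

# Example: three agents with contribution levels 1, 2, and 3
print(proportional_rewards([1, 2, 3], total_reward=60.0))  # [10.0, 20.0, 30.0]
```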
