Paper Title


Personalized Federated Learning with Hidden Information on Personalized Prior

Authors

Mingjia Shi, Yuhao Zhou, Qing Ye, Jiancheng Lv

Abstract


Federated learning (FL for short) is a distributed machine learning technique that uses a global server and collaborating clients to train a privacy-preserving global model without direct data sharing. However, the heterogeneous data problem, one of FL's main challenges, makes it difficult for the global model to perform effectively on each client's local data. Personalized federated learning (PFL for short) therefore aims to improve the model's performance on local data as much as possible. Bayesian learning, in which the model's parameters are treated as random variables under a prior assumption, is a feasible solution to the heterogeneous data problem: the more local data the model uses, the more it focuses on that data; otherwise it falls back on the prior. When Bayesian learning is applied to PFL, the global model supplies global knowledge as the prior for the local training process. In this paper, we employ Bayesian learning to model PFL by assuming a prior in the scaled exponential family, and propose pFedBreD, a framework that solves the resulting problem with Bregman divergence regularization. Empirically, under the prior assumption of a spherical Gaussian and a first-order mean-selection strategy, our proposal significantly outperforms other PFL algorithms on multiple public benchmarks.
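To make the core idea concrete: under a spherical Gaussian prior, the Bregman divergence in the regularizer reduces to a (scaled) squared Euclidean distance, so each client's local objective is its own loss plus a term pulling the personalized parameters toward the prior mean supplied by the global model. The sketch below is purely illustrative (a least-squares local loss and plain gradient steps are assumptions, not pFedBreD's actual formulation, which is more general):

```python
import numpy as np

def bregman_sq_euclidean(theta, mu):
    # For a spherical Gaussian prior, the induced Bregman divergence
    # reduces to the squared Euclidean distance to the prior mean mu.
    return 0.5 * np.sum((theta - mu) ** 2)

def local_objective(theta, X, y, mu, lam):
    # Illustrative local loss (least squares) plus the Bregman
    # regularizer pulling theta toward the global prior mean mu.
    residual = X @ theta - y
    return 0.5 * np.mean(residual ** 2) + lam * bregman_sq_euclidean(theta, mu)

def personalized_step(theta, X, y, mu, lam, lr=0.1):
    # One gradient step on the regularized local objective; larger lam
    # keeps the personalized model closer to the global prior.
    grad = X.T @ (X @ theta - y) / len(y) + lam * (theta - mu)
    return theta - lr * grad
```

In this form the client trades off fitting its local data against staying near the global model, which is the Bayesian intuition described in the abstract: with little local data, the prior (global knowledge) dominates; with ample local data, the local loss dominates.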
