Paper Title
Federated Learning with Partial Model Personalization
Paper Authors
Paper Abstract
We consider two federated learning algorithms for training partially personalized models, where the shared and personal parameters are updated either simultaneously or alternately on the devices. Both algorithms have been proposed in the literature, but their convergence properties are not fully understood, especially for the alternating variant. We provide convergence analyses of both algorithms in the general nonconvex setting with partial participation and delineate the regime where one dominates the other. Our experiments on real-world image, text, and speech datasets demonstrate that (a) partial personalization can obtain most of the benefits of full model personalization with a small fraction of personal parameters, and (b) the alternating update algorithm often outperforms the simultaneous update algorithm by a small but consistent margin.
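To make the two update schemes concrete, here is a minimal sketch (not the paper's implementation) contrasting simultaneous and alternating updates of shared and personal parameters. The per-device quadratic loss `f_i(u, v_i) = 0.5*(u + v_i - a_i)**2 + 0.5*lam*v_i**2`, the scalar data `a_i`, the learning rate, and the step counts are all illustrative assumptions; the paper's algorithms operate on general nonconvex models with partial participation.

```python
def grad(u, v, a, lam):
    """Gradients of the toy loss f(u, v) = 0.5*(u + v - a)^2 + 0.5*lam*v^2."""
    r = u + v - a
    return r, r + lam * v  # (df/du, df/dv)

def local_round(u, v, a, lam, lr=0.1, steps=5, alternating=True):
    """One device's local work on shared u and personal v; returns (u, v)."""
    if alternating:
        for _ in range(steps):           # v-step: personal params first, u frozen
            _, gv = grad(u, v, a, lam)
            v -= lr * gv
        for _ in range(steps):           # u-step: shared params, v frozen
            gu, _ = grad(u, v, a, lam)
            u -= lr * gu
    else:
        for _ in range(steps):           # simultaneous joint gradient descent
            gu, gv = grad(u, v, a, lam)
            u, v = u - lr * gu, v - lr * gv
    return u, v

def federated(alternating, rounds=50, lam=1.0):
    data = [1.0, 2.0, 3.0, 4.0]          # one scalar "dataset" per device
    u = 0.0                              # shared parameter (kept on server)
    vs = [0.0] * len(data)               # personal parameters (kept on devices)
    for _ in range(rounds):
        new_us = []
        for i, a in enumerate(data):     # full participation, for simplicity
            ui, vs[i] = local_round(u, vs[i], a, lam, alternating=alternating)
            new_us.append(ui)
        u = sum(new_us) / len(new_us)    # server averages only the shared part
    return u, vs
```

Personal parameters `vs[i]` never leave their device; only the shared `u` is averaged by the server, which is the core of partial model personalization. Switching `alternating` toggles between the two variants analyzed in the abstract.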