Paper Title
WAFFLe: Weight Anonymized Factorization for Federated Learning
Paper Authors
Paper Abstract
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices. In light of this need, federated learning has emerged as a popular training paradigm. However, many federated learning approaches trade transmitting data for communicating updated weight parameters for each local device. Therefore, a successful breach that would have otherwise directly compromised the data instead grants whitebox access to the local model, which opens the door to a number of attacks, including exposing the very data federated learning seeks to protect. Additionally, in distributed scenarios, individual client devices commonly exhibit high statistical heterogeneity. Many common federated approaches learn a single global model; while this may do well on average, performance degrades when the i.i.d. assumption is violated, underfitting individuals further from the mean, and raising questions of fairness. To address these issues, we propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks. Experiments on MNIST, FashionMNIST, and CIFAR-10 demonstrate WAFFLe's significant improvement to local test performance and fairness while simultaneously providing an extra layer of security.
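The abstract's core idea is that each client's network weights are composed from a shared dictionary of weight factors, with per-client binary factor assignments governed by an Indian Buffet Process prior. The sketch below illustrates only that composition step with numpy; the dimensions, function names, and the simple additive combination are illustrative assumptions, not the paper's actual implementation or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): one dense layer of shape
# (in_dim, out_dim) composed from K globally shared weight factors.
in_dim, out_dim, K = 8, 4, 6

# Shared dictionary of weight factors (in WAFFLe, learned jointly
# across clients; here it is random for illustration).
dictionary = rng.normal(size=(K, in_dim, out_dim))

def client_weights(z):
    """Compose a client's layer weights from the shared dictionary.

    z is a binary factor-assignment vector (in WAFFLe, drawn from an
    Indian Buffet Process prior); only the selected factors contribute.
    """
    z = np.asarray(z, dtype=float)           # shape (K,)
    return np.tensordot(z, dictionary, axes=1)  # shape (in_dim, out_dim)

# Two clients share some factors but compose different local models,
# so heterogeneous clients need not collapse onto one global model.
z_a = np.array([1, 0, 1, 0, 0, 1])
z_b = np.array([1, 1, 0, 0, 1, 0])
W_a, W_b = client_weights(z_a), client_weights(z_b)
```

Because the server only ever sees the shared dictionary and sparse assignment statistics rather than any single client's full weights, a breach exposes less about an individual's local model, which is the "extra layer of security" the abstract refers to.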