Paper Title
Dynamic Defense Against Byzantine Poisoning Attacks in Federated Learning
Paper Authors
Abstract
Federated learning, a distributed learning paradigm that trains on local devices without accessing the training data, is vulnerable to Byzantine poisoning adversarial attacks. We argue that a federated learning model must withstand such attacks by filtering out adversarial clients in the federated aggregation operator. We propose a dynamic federated aggregation operator that discards adversarial clients on the fly and thereby prevents the corruption of the global learning model. We assess it as a defense against adversarial attacks by deploying a deep learning classification model in a federated learning setting on the Fed-EMNIST Digits, Fashion MNIST and CIFAR-10 image datasets. The results show that dynamically selecting the clients to aggregate enhances the performance of the global learning model and discards both adversarial clients and poor clients (those with low-quality models).
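The idea of an aggregation operator that drops suspect clients each round can be sketched as follows. This is a minimal illustration, not the operator proposed in the paper: the filtering rule here (a simple per-client score threshold, e.g. validation accuracy on a held-out set) and the function names are assumptions for the sake of the example.

```python
# Illustrative sketch: federated averaging with dynamic client filtering.
# The scoring/threshold rule is a hypothetical stand-in for the paper's
# dynamic federated aggregation operator.

def dynamic_aggregate(client_weights, client_scores, threshold=0.5):
    """Average the parameter vectors of clients whose score meets the
    threshold; low-scoring (potentially adversarial or low-quality)
    clients are discarded from this aggregation round."""
    kept = [w for w, s in zip(client_weights, client_scores) if s >= threshold]
    if not kept:
        raise ValueError("no client passed the filter this round")
    dim = len(kept[0])
    # Coordinate-wise mean over the retained clients only.
    return [sum(w[i] for w in kept) / len(kept) for i in range(dim)]

# Two benign clients and one poisoned client with a low score: the poisoned
# update is excluded, so the global model is the mean of the benign ones.
global_update = dynamic_aggregate(
    client_weights=[[1.0, 1.0], [3.0, 3.0], [100.0, 100.0]],
    client_scores=[0.9, 0.8, 0.1],
)
print(global_update)  # [2.0, 2.0]
```

In a real deployment the scores would be recomputed every communication round, which is what makes the selection dynamic: a client excluded in one round may be readmitted later if its model quality recovers.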