Paper Title


Secure Distributed Optimization Under Gradient Attacks

Paper Authors

Shuhua Yu, Soummya Kar

Paper Abstract


In this paper, we study secure distributed optimization against arbitrary gradient attacks in multi-agent networks. In distributed optimization, there is no central server to coordinate local updates, and each agent can only communicate with its neighbors on a predefined network. We consider the scenario where, out of $n$ networked agents, a fixed but unknown fraction $\rho$ of the agents are under arbitrary gradient attack, in that their stochastic gradient oracles return arbitrary information to derail the optimization process, and the goal is to minimize the sum of the local objective functions of the unattacked agents. We propose a distributed stochastic gradient method that combines local variance reduction and clipping (CLIP-VRG). We show that, in a connected network, when the unattacked local objective functions are convex and smooth, share a common minimizer, and their sum is strongly convex, CLIP-VRG leads to almost sure convergence of the iterates to the exact sum cost minimizer at all agents. We quantify a tight upper bound on the fraction $\rho$ of attacked agents, in terms of problem parameters such as the condition number of the associated sum cost, that guarantees exact convergence of CLIP-VRG, and we characterize its asymptotic convergence rate. Finally, we empirically demonstrate the effectiveness of the proposed method under gradient attacks on both synthetic and image classification datasets.
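The abstract does not give CLIP-VRG's exact update rule, only that it combines local variance reduction with clipping on top of decentralized communication. As a rough illustration only, here is a minimal Python sketch of what one clipped, variance-reduced decentralized step could look like; the mixing matrix W, step size alpha, clipping threshold tau, and the assumption that each agent already holds a variance-reduced gradient estimate are all hypothetical choices not taken from the paper.

```python
import numpy as np

# Hypothetical sketch of one CLIP-VRG-style iteration, assuming:
# - W is a doubly stochastic mixing matrix of the connected network,
# - g[i] is agent i's local variance-reduced stochastic gradient
#   (e.g., an SVRG/SAGA-style estimate; the paper's exact estimator
#   is not specified in the abstract),
# - attacked agents may place arbitrary vectors in their rows of g,
#   which is why every local gradient is norm-clipped before use.

def clip(v, tau):
    """Scale v down to norm tau if it exceeds tau (standard norm clipping)."""
    norm = np.linalg.norm(v)
    return v if norm <= tau else (tau / norm) * v

def clip_vrg_step(x, g, W, alpha, tau):
    """One synchronous iteration over all n agents.

    x : (n, d) array of local iterates
    g : (n, d) array of local variance-reduced gradient estimates
        (rows from attacked agents may be arbitrary)
    W : (n, n) doubly stochastic mixing matrix
    alpha : step size, tau : clipping threshold
    """
    # Clipping bounds the influence any single (possibly attacked)
    # gradient oracle can exert on the update.
    g_clipped = np.array([clip(gi, tau) for gi in g])
    # Consensus averaging with neighbors, then a clipped gradient step.
    return W @ x - alpha * g_clipped

# Example usage on a toy 3-agent fully mixed network (hypothetical values):
# W = np.full((3, 3), 1.0 / 3)
# x = np.zeros((3, 2)); g = np.random.randn(3, 2)
# x = clip_vrg_step(x, g, W, alpha=0.1, tau=1.0)
```

In this sketch the clipping threshold is fixed, whereas the paper's convergence guarantee ties the tolerable attack fraction $\rho$ to problem parameters such as the condition number of the sum cost; how tau is scheduled relative to those parameters is left to the full paper.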
