Title
Distributed Stochastic Compositional Optimization Problems over Directed Networks
Authors
Abstract
We study distributed stochastic compositional optimization problems over directed communication networks, in which each agent privately owns a stochastic compositional objective function and the agents collaborate to minimize the sum of all objective functions. We propose a distributed stochastic compositional gradient descent method, in which gradient tracking and stochastic correction techniques are employed to adapt to the directed structure of the network and to increase the accuracy of the inner-function estimates. When the objective function is smooth, the proposed method achieves the convergence rate $\mathcal{O}\left(k^{-1/2}\right)$ and the sample complexity $\mathcal{O}\left(\frac{1}{\epsilon^2}\right)$ for finding an $\epsilon$-stationary point. When the objective function is strongly convex, the convergence rate improves to $\mathcal{O}\left(k^{-1}\right)$. Moreover, we establish the asymptotic normality of the Polyak-Ruppert averaged iterates of the proposed method. We demonstrate the empirical performance of the proposed method on a model-agnostic meta-learning problem and a logistic regression problem.
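The abstract does not spell out the problem formulation. The following display is a standard form consistent with the terminology above; the $n$-agent decomposition and the sampling variables $\phi_i$, $\xi_i$ are assumptions for illustration:

$$\min_{x \in \mathbb{R}^d} \; \frac{1}{n} \sum_{i=1}^{n} f_i\!\left( g_i(x) \right), \qquad f_i(y) = \mathbb{E}_{\phi_i}\!\left[ F_i(y; \phi_i) \right], \quad g_i(x) = \mathbb{E}_{\xi_i}\!\left[ G_i(x; \xi_i) \right],$$

where agent $i$ only draws samples of $G_i$, its Jacobian, and $\nabla F_i$. The difficulty is that $\nabla (f_i \circ g_i)(x) = \nabla g_i(x)^{\top} \nabla f_i(g_i(x))$ is not an expectation of single-sample quantities: plugging a one-sample estimate of $g_i(x)$ inside $\nabla f_i$ yields a biased gradient, which is what an inner-function estimator with stochastic correction mitigates.

As a minimal single-agent sketch of that correction (not the paper's distributed method: the toy quadratic problem, noise level, and step-size schedules below are all illustrative assumptions), a stochastically corrected moving average $u_k$ tracks the inner value $g(x_k)$ while the iterate moves:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5

# Toy compositional objective f(g(x)) with
#   inner map  g(x) = E_xi[G(x; xi)] = A x   (only noisy samples available)
#   outer map  f(y) = 0.5 * ||y - b||^2      (gradient known in closed form)
A = rng.standard_normal((d, d)) / np.sqrt(d)
b = rng.standard_normal(d)

def G(x, xi):                # one inner-function sample, E[G(x, xi)] = A @ x
    return A @ x + xi

def grad_f(y):               # exact outer gradient
    return y - b

x = np.zeros(d)
u = G(x, 0.1 * rng.standard_normal(d))   # running estimate of g(x)

for k in range(1, 5001):
    beta = 1.0 / np.sqrt(k)              # averaging weight (illustrative schedule)
    lr = 0.5 / np.sqrt(k)                # step size (illustrative schedule)
    x_new = x - lr * A.T @ grad_f(u)     # chain rule with tracked inner value u ~= g(x)
    xi = 0.1 * rng.standard_normal(d)    # ONE fresh sample, evaluated at both points:
    # stochastic correction: the difference G(x_new, xi) - G(x, xi) (same xi)
    # compensates for the drift of the iterates, so u keeps tracking g(x_new)
    u = (1 - beta) * (u + G(x_new, xi) - G(x, xi)) + beta * G(x_new, xi)
    x = x_new

x_star = np.linalg.solve(A.T @ A, A.T @ b)   # minimizer of 0.5 * ||A x - b||^2
print("distance to minimizer:", np.linalg.norm(x - x_star))
```

In the distributed setting studied by the paper, each agent would additionally mix its iterate and a gradient-tracking variable with its in-neighbors using weights compatible with the directed graph; the recursion above only illustrates the inner-function correction.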