Title
Distributed Prediction-Correction ADMM for Time-Varying Convex Optimization
Authors
Abstract
This paper introduces a dual-regularized ADMM approach to distributed, time-varying optimization. The proposed algorithm is designed in a prediction-correction framework, in which the computing nodes predict their future local costs based on past observations and exploit this information to solve the time-varying problem more effectively. To guarantee linear convergence of the algorithm, a regularization is applied to the dual variable, yielding a dual-regularized ADMM. We analyze the convergence properties of the time-varying algorithm, as well as the regularization error introduced by the dual regularization. Numerical results show that in time-varying settings, despite the regularization error, the dual-regularized ADMM can outperform inexact gradient-based methods, as well as exact dual decomposition techniques, in terms of asymptotic error and consensus constraint violation.
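To make the two ingredients of the abstract concrete, the following is a minimal sketch (an illustration under generic assumptions, not the paper's exact formulation): a first-order Taylor prediction of node i's local cost over a sampling period T_s, and an augmented Lagrangian carrying a dual regularization term with weight \epsilon > 0.

% Hypothetical prediction step: node i extrapolates its sampled local cost
% f_{i,k} to the next sampling instant using past observations.
\hat{f}_{i,k+1}(x) \approx f_{i,k}(x) + T_s \, \partial_t f_{i,k}(x)

% Augmented Lagrangian for min f(x) + g(z) s.t. Ax + Bz = c, with ADMM
% penalty \rho and a dual regularization term of weight \epsilon > 0; the
% -(\epsilon/2)\|\lambda\|^2 term makes the dual problem strongly concave,
% enabling linear convergence at the price of a bounded regularization error.
\mathcal{L}_{\rho,\epsilon}(x, z; \lambda)
  = f(x) + g(z) + \lambda^{\top}(Ax + Bz - c)
  + \frac{\rho}{2}\,\|Ax + Bz - c\|^{2}
  - \frac{\epsilon}{2}\,\|\lambda\|^{2}

The \epsilon-term biases the dual variable toward zero, so the fixed point of the iteration is perturbed on the order of \epsilon; this is consistent with the regularization error the abstract refers to, which the numerical results show can still be favorable in time-varying settings.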