Paper title
An adaptive proximal point algorithm framework and application to large-scale optimization
Paper authors
Paper abstract
We investigate the proximal point algorithm (PPA) and its inexact extensions under an error bound condition, which guarantees global linear convergence when the proximal regularization parameter is larger than the error bound condition parameter. We propose an adaptive generalized proximal point algorithm (AGPPA), which adaptively updates the proximal regularization parameters based on implementable criteria. We show that AGPPA achieves linear convergence without any knowledge of the error bound condition parameter, and that its rate differs from the optimal one only by a logarithmic term. We apply AGPPA to convex minimization problems and analyze the iteration complexity bound of the resulting algorithm. Our framework and complexity results apply to any linearly convergent inner solver and allow hybridization with any locally fast convergent method. We illustrate the performance of AGPPA by applying it to solve large-scale linear programming (LP) problems. The resulting complexity bound has a weaker dependence on the Hoffman constant and scales better with the dimension than that of linearized ADMM. In numerical experiments, our algorithm demonstrates improved performance in obtaining medium-accuracy solutions of large-scale LP problems.
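As a rough illustration of the idea behind an adaptively regularized proximal point method, the following Python sketch runs an exact PPA on a small least-squares objective and enlarges the regularization parameter when the step norms stop contracting. The test problem, the closed-form prox subproblem, and the adaptive rule are all illustrative assumptions for this sketch, not the paper's actual AGPPA criteria.

```python
import numpy as np

def prox_quadratic(A, b, x, lam):
    """Exact prox of f(z) = 0.5*||A z - b||^2 with parameter lam:
    argmin_z f(z) + (1/(2*lam)) * ||z - x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + np.eye(n) / lam, A.T @ b + x / lam)

def adaptive_ppa(A, b, x0, lam0=1.0, growth=2.0, tol=1e-8, max_iter=100):
    x, lam, prev_step = x0.astype(float), lam0, np.inf
    for _ in range(max_iter):
        x_new = prox_quadratic(A, b, x, lam)
        step = np.linalg.norm(x_new - x)
        if step < tol:
            return x_new
        # Placeholder adaptive rule: enlarge lam when the step norms
        # are not contracting fast enough (a stand-in for the paper's
        # implementable update criteria).
        if step > 0.5 * prev_step:
            lam *= growth
        x, prev_step = x_new, step
    return x

# Minimize 0.5*||A x - b||^2; the unique minimizer is x* = (1, 3).
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 3.0])
x = adaptive_ppa(A, b, np.zeros(2))
```

For a real LP or large-scale problem, the prox subproblem would itself be solved inexactly by a linearly convergent inner solver, which is the regime the paper's complexity analysis addresses.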