Paper Title
Accelerated gradient methods with strong convergence to the minimum norm minimizer: a dynamic approach combining time scaling, averaging, and Tikhonov regularization
Paper Authors
Paper Abstract
In a Hilbert framework, for convex differentiable optimization, we consider accelerated gradient methods obtained by combining temporal scaling and averaging techniques with Tikhonov regularization. We start from the continuous steepest descent dynamic with an additional Tikhonov regularization term whose coefficient vanishes asymptotically. We provide an extensive Lyapunov analysis of this first-order evolution equation. We then apply to this dynamic the time scaling and averaging method recently introduced by Attouch, Bot and Nguyen. We thus obtain an inertial dynamic that involves the viscous damping associated with Nesterov's method, implicit Hessian damping, and Tikhonov regularization. Under an appropriate setting of the parameters, using only Jensen's inequality and no further Lyapunov analysis, we show that the trajectories simultaneously enjoy several remarkable properties: fast convergence of the values, fast convergence of the gradients to zero, and strong convergence to the minimum norm minimizer. These results complete and improve previous results obtained by the authors.
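To make the description concrete, the following is a schematic sketch of the two dynamics the abstract refers to, written in the notation standard in this literature; the symbols \(f\), \(\varepsilon(t)\), \(\alpha\), and the extrapolation coefficient \(t/(\alpha-1)\) are assumptions drawn from related work in this line of research, not reproduced from the paper, whose exact parameterization may differ. The starting point is the Tikhonov-regularized continuous steepest descent

\[
\dot{z}(t) + \nabla f(z(t)) + \varepsilon(t)\, z(t) = 0,
\qquad \varepsilon(t) > 0, \quad \varepsilon(t) \to 0 \ \text{as}\ t \to +\infty,
\]

whose trajectories are classically known to converge strongly to the minimum norm minimizer when \(\varepsilon(\cdot)\) vanishes slowly enough (typically \(\int_{t_0}^{+\infty} \varepsilon(t)\, dt = +\infty\)). Applying time scaling and averaging to such a first-order dynamic typically yields an inertial system of the form

\[
\ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t)
+ \nabla f\!\Big(x(t) + \frac{t}{\alpha-1}\,\dot{x}(t)\Big)
+ \varepsilon(t)\Big(x(t) + \frac{t}{\alpha-1}\,\dot{x}(t)\Big) = 0,
\]

where \(\alpha/t\) is the viscous damping of Nesterov's accelerated method, and evaluating \(\nabla f\) at the extrapolated point \(x(t) + \frac{t}{\alpha-1}\,\dot{x}(t)\) carries the implicit Hessian damping: a first-order Taylor expansion gives \(\nabla f(x(t)) + \frac{t}{\alpha-1}\,\nabla^2 f(x(t))\,\dot{x}(t)\) up to higher-order terms.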