Paper Title

On the Convergence Rate of Projected Gradient Descent for a Back-Projection based Objective

Authors

Tom Tirer, Raja Giryes

Abstract

Ill-posed linear inverse problems appear in many scientific setups, and are typically addressed by solving optimization problems, which are composed of data fidelity and prior terms. Recently, several works have considered a back-projection (BP) based fidelity term as an alternative to the common least squares (LS), and demonstrated excellent results for popular inverse problems. These works have also empirically shown that using the BP term, rather than the LS term, requires fewer iterations of optimization algorithms. In this paper, we examine the convergence rate of the projected gradient descent (PGD) algorithm for the BP objective. Our analysis allows us to identify an inherent source for its faster convergence compared to using the LS objective, while making only mild assumptions. We also analyze the more general proximal gradient method under a relaxed contraction condition on the proximal mapping of the prior. This analysis further highlights the advantage of BP when the linear measurement operator is badly conditioned. Numerical experiments with both $\ell_1$-norm and GAN-based priors corroborate our theoretical results.
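The contrast the abstract describes can be illustrated with a small numerical sketch. Below is a minimal, hedged toy example (not the paper's exact experimental setup): proximal gradient (ISTA-style) steps with an $\ell_1$ prior are run on the LS objective $f_{LS}(x) = \frac{1}{2}\|Ax - y\|^2$ and on a BP-style objective $f_{BP}(x) = \frac{1}{2}\|A^{\dagger}(Ax - y)\|^2$, whose gradient simplifies to $A^{\dagger}(Ax - y)$ (here $A^{\dagger}$ is the Moore-Penrose pseudoinverse). All dimensions, the spectrum of $A$, and the regularization weight `lam` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 40, 80, 5

# Badly conditioned measurement operator built from an explicit singular spectrum.
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -3, m)                 # condition number ~1e3 (assumed, illustrative)
A = U @ np.diag(s) @ V[:, :m].T
A_pinv = np.linalg.pinv(A)

# Sparse ground truth and noiseless measurements.
x_star = np.zeros(n)
x_star[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_star

def soft(z, t):
    """Soft-thresholding: the proximal mapping of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(grad, mu, lam, iters=200):
    """Proximal gradient iterations x <- prox_{mu*lam*||.||_1}(x - mu*grad(x))."""
    x = np.zeros(n)
    for _ in range(iters):
        x = soft(x - mu * grad(x), mu * lam)
    return x

lam = 1e-3
# Step sizes from the Lipschitz constants: ||A||^2 = 1 for LS (s[0] = 1),
# and 1 for BP, since A^+ A is an orthogonal projector (eigenvalues in {0, 1}).
x_ls = ista(lambda x: A.T @ (A @ x - y), mu=1.0, lam=lam)
x_bp = ista(lambda x: A_pinv @ (A @ x - y), mu=1.0, lam=lam)

err_ls = np.linalg.norm(x_ls - x_star) / np.linalg.norm(x_star)
err_bp = np.linalg.norm(x_bp - x_star) / np.linalg.norm(x_star)
print(f"relative error after 200 iterations  LS: {err_ls:.3f}  BP: {err_bp:.3f}")
```

The intuition matches the abstract's claim: the BP gradient involves $A^{\dagger}A$, an orthogonal projector with a flat spectrum, so the iteration is insensitive to the small singular values of $A$ that slow the LS iteration down on this badly conditioned operator.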
