Paper Title
Faster Randomized Block Sparse Kaczmarz by Averaging

Authors

Lionel Tondji, Dirk A. Lorenz

Abstract

The standard randomized sparse Kaczmarz (RSK) method is an algorithm for computing sparse solutions of linear systems of equations; it uses sequential updates and thus does not take advantage of parallel computation. In this work, we introduce a parallel (mini-batch) version of RSK based on averaging several Kaczmarz steps. Naturally, this method allows for parallelization, and we show that it can also leverage large over-relaxation. We prove linear expected convergence and show that, given that parallel computations can be exploited, the method provably provides faster convergence than the standard method. This method can also be viewed as a variant of the linearized Bregman algorithm, a randomized dual block coordinate descent update, a stochastic mirror descent update, or a relaxed version of RSK, and we recover the standard RSK method when the batch size is equal to one. We also provide estimates for inconsistent systems and show that the iterates converge to an error on the order of the noise level. Finally, numerical examples illustrate the benefits of the new algorithm.
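The update the abstract describes can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: a batch of rows is sampled, the individual Kaczmarz directions are averaged into a dual update, and the primal iterate is obtained by soft shrinkage (as in the sparse Kaczmarz / linearized Bregman connection mentioned above). The function name `rsk_averaged`, the over-relaxation parameter `eta`, and all numerical values are illustrative assumptions.

```python
import numpy as np

def soft_shrink(z, lam):
    # Soft-thresholding operator: S_lam(z) = sign(z) * max(|z| - lam, 0)
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def rsk_averaged(A, b, lam=1.0, batch_size=5, eta=1.0, iters=2000, seed=0):
    # Hedged sketch of mini-batch sparse Kaczmarz with averaging.
    # Dual iterate z is updated with the average of the Kaczmarz
    # directions over a random batch; the primal iterate is x = S_lam(z).
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.sum(A ** 2, axis=1)  # squared row norms ||a_i||^2
    z = np.zeros(n)
    x = np.zeros(n)
    for _ in range(iters):
        idx = rng.choice(m, size=batch_size, replace=False)
        resid = A[idx] @ x - b[idx]
        # average of the per-row Kaczmarz steps ((a_i^T x - b_i)/||a_i||^2) a_i
        step = (A[idx].T @ (resid / row_norms[idx])) / batch_size
        z -= eta * step          # eta > 1 would correspond to over-relaxation
        x = soft_shrink(z, lam)  # sparsity-promoting primal update
    return x
```

With `batch_size=1` and `eta=1.0` this reduces to the standard RSK update; larger batches are what make the method amenable to parallel evaluation of the per-row residuals.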