Paper Title
Training (Overparametrized) Neural Networks in Near-Linear Time
Paper Authors
Paper Abstract
The slow convergence rate and pathological curvature issues of first-order gradient methods for training deep neural networks initiated an ongoing effort to develop faster $\mathit{second}$-$\mathit{order}$ optimization algorithms beyond SGD, without compromising the generalization error. Despite their remarkable convergence rate ($\mathit{independent}$ of the training batch size $n$), second-order algorithms incur a daunting slowdown in the $\mathit{cost}$ $\mathit{per}$ $\mathit{iteration}$ (inverting the Hessian matrix of the loss function), which renders them impractical. Very recently, this computational overhead was mitigated by the works of [ZMG19, CGH+19], yielding an $O(mn^2)$-time second-order algorithm for training two-layer overparametrized neural networks of polynomial width $m$. We show how to speed up the algorithm of [CGH+19], achieving an $\tilde{O}(mn)$-time backpropagation algorithm for training (mildly overparametrized) ReLU networks, which is near-linear in the dimension ($mn$) of the full gradient (Jacobian) matrix. The centerpiece of our algorithm is to reformulate the Gauss-Newton iteration as an $\ell_2$-regression problem, and then use a Fast-JL type dimension reduction to $\mathit{precondition}$ the underlying Gram matrix in time independent of $M$, allowing us to find a sufficiently good approximate solution via $\mathit{first}$-$\mathit{order}$ conjugate gradient. Our result provides a proof-of-concept that advanced machinery from randomized linear algebra -- which led to recent breakthroughs in $\mathit{convex}$ $\mathit{optimization}$ (ERM, LPs, regression) -- can be carried over to the realm of deep learning as well.
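To make the "reformulate as $\ell_2$-regression, sketch, precondition, then run conjugate gradient" pipeline concrete, below is a minimal, self-contained sketch of the generic sketch-and-precondition idea on a synthetic tall least-squares problem. It is not the paper's implementation: the function names (`srht_sketch`, `sketch_precondition_cg`), the choice of a subsampled randomized Hadamard transform as the Fast-JL sketch, the QR-based preconditioner, and the toy problem sizes are all illustrative assumptions, and the paper applies the idea to the Gauss-Newton / Gram system of an overparametrized ReLU network rather than to a random matrix.

```python
import numpy as np


def srht_sketch(A, sketch_size, rng):
    """Fast-JL type sketch (subsampled randomized Hadamard transform) of A.

    Pads the rows of A to a power of two, flips signs at random, applies a
    fast Walsh-Hadamard transform down the columns, and keeps a random
    subset of `sketch_size` rescaled rows.
    """
    n_rows, n_cols = A.shape
    n_pad = 1 << int(np.ceil(np.log2(n_rows)))
    X = np.zeros((n_pad, n_cols))
    X[:n_rows] = A * rng.choice([-1.0, 1.0], size=(n_rows, 1))  # random signs
    h = 1
    while h < n_pad:  # in-place fast Walsh-Hadamard transform (butterflies)
        for i in range(0, n_pad, 2 * h):
            top, bot = X[i:i + h].copy(), X[i + h:i + 2 * h].copy()
            X[i:i + h], X[i + h:i + 2 * h] = top + bot, top - bot
        h *= 2
    X /= np.sqrt(n_pad)                                # orthonormal H D A
    rows = rng.choice(n_pad, size=sketch_size, replace=False)
    return X[rows] * np.sqrt(n_pad / sketch_size)      # rescaled row sample


def sketch_precondition_cg(A, b, sketch_size=None, tol=1e-10, max_iter=50, seed=0):
    """Solve min_x ||A x - b||_2 for a tall matrix A by sketch-and-precondition.

    1. Sketch A down to a small matrix S A.
    2. QR-factorize S A; the triangular factor R preconditions A.
    3. Run conjugate gradient on the preconditioned normal equations
       (R^{-T} A^T A R^{-1}) y = R^{-T} A^T b, then recover x = R^{-1} y.
    Each CG iteration only needs matrix-vector products with A and A^T.
    """
    m, n = A.shape
    rng = np.random.default_rng(seed)
    if sketch_size is None:
        sketch_size = max(4 * n, int(2 * n * np.log(n + 1)))
    _, R = np.linalg.qr(srht_sketch(A, sketch_size, rng))  # n x n preconditioner

    def apply_M(y):  # y -> R^{-T} A^T A R^{-1} y
        x = np.linalg.solve(R, y)
        return np.linalg.solve(R.T, A.T @ (A @ x))

    rhs = np.linalg.solve(R.T, A.T @ b)
    y = np.zeros(n)
    res = rhs - apply_M(y)
    p = res.copy()
    rs_old = res @ res
    for _ in range(max_iter):
        Mp = apply_M(p)
        alpha = rs_old / (p @ Mp)
        y += alpha * p
        res -= alpha * Mp
        rs_new = res @ res
        if np.sqrt(rs_new) < tol:       # the preconditioned system is well
            break                       # conditioned, so this triggers quickly
        p = res + (rs_new / rs_old) * p
        rs_old = rs_new
    return np.linalg.solve(R, y)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    m, n = 4096, 64                                          # tall: m >> n
    A = rng.standard_normal((m, n)) * np.logspace(0, 3, n)   # ill-conditioned columns
    b = rng.standard_normal(m)
    x_hat = sketch_precondition_cg(A, b)
    x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(np.linalg.norm(x_hat - x_ref) / np.linalg.norm(x_ref))
```

The point the demo mirrors is the cost split claimed in the abstract: building the preconditioner only touches the small sketched matrix, while each conjugate-gradient step needs just one multiplication by $A$ and one by $A^T$, so the overall work stays near-linear in the size of the (Jacobian-like) matrix.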