Paper Title
Single-Timescale Actor-Critic Provably Finds Globally Optimal Policy
Paper Authors
Paper Abstract
We study the global convergence and global optimality of actor-critic, one of the most popular families of reinforcement learning algorithms. While most existing works on actor-critic employ bi-level or two-timescale updates, we focus on the more practical single-timescale setting, where the actor and critic are updated simultaneously. Specifically, in each iteration, the critic update is obtained by applying the Bellman evaluation operator only once, while the actor is updated in the policy gradient direction computed using the critic. Moreover, we consider two function approximation settings in which the actor and critic are both represented by either linear functions or deep neural networks. For both cases, we prove that the actor sequence converges to a globally optimal policy at a sublinear $O(K^{-1/2})$ rate, where $K$ is the number of iterations. To the best of our knowledge, we establish the rate of convergence and global optimality of single-timescale actor-critic with linear function approximation for the first time. Moreover, under the broader scope of policy optimization with nonlinear function approximation, we prove for the first time that actor-critic with deep neural networks finds the globally optimal policy at a sublinear rate.
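To make the single-timescale update scheme described in the abstract concrete, below is a minimal sketch (not the paper's implementation or analysis setting): on a toy randomly generated MDP with one-hot state-action features, each iteration applies the Bellman evaluation operator of the current policy once to form the critic target, takes one TD-style critic step toward it, and simultaneously takes one softmax policy-gradient step for the actor using that critic. The MDP, feature choice, step sizes `alpha`/`beta`, and iteration count `K` are illustrative assumptions.

```python
# Minimal single-timescale actor-critic sketch with linear function approximation.
# All problem sizes, step sizes, and the toy MDP are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 3, 0.9

# Random toy MDP: transition kernel P[s, a, s'] and reward R[s, a] (assumptions).
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))

theta = np.zeros((n_states, n_actions))  # actor: softmax policy parameters
w = np.zeros(n_states * n_actions)       # critic: weights of a linear Q(s,a) = phi(s,a)^T w
# With one-hot state-action features, the critic weight vector is a flattened Q-table.

def policy(s):
    p = np.exp(theta[s] - theta[s].max())
    return p / p.sum()

K, alpha, beta = 500, 0.1, 0.1           # iterations and step sizes (assumed)
for k in range(K):
    # Critic: apply the Bellman evaluation operator of the current policy once,
    # (T^pi Q)(s, a) = R(s, a) + gamma * E_{s'}[ sum_a' pi(a'|s') Q(s', a') ],
    # then take one TD-style step of the linear critic toward that target.
    Q = w.reshape(n_states, n_actions)
    V_next = np.array([policy(s2) @ Q[s2] for s2 in range(n_states)])
    target = R + gamma * np.einsum('sap,p->sa', P, V_next)
    w = w + beta * (target.reshape(-1) - w)

    # Actor: one softmax policy-gradient step using the current critic
    # (states weighted uniformly here for simplicity).
    for s in range(n_states):
        pi_s = policy(s)
        adv = Q[s] - pi_s @ Q[s]          # advantage estimate from the critic
        theta[s] += alpha * pi_s * adv    # softmax policy-gradient direction

print("Greedy policy after training:", w.reshape(n_states, n_actions).argmax(axis=1))
```

The key point of the single-timescale setting is visible in the loop: the critic does not run to convergence before the actor moves; both take exactly one update per iteration with comparable step sizes.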