Paper Title
When Will Generative Adversarial Imitation Learning Algorithms Attain Global Convergence
Paper Authors
Paper Abstract
Generative adversarial imitation learning (GAIL) is a popular inverse reinforcement learning approach for jointly optimizing the policy and the reward from expert trajectories. A primary question about GAIL is whether applying a given policy gradient algorithm to GAIL attains a global minimizer (i.e., yields the expert policy), for which existing understanding is very limited. Such global convergence has been shown only for linear (or linear-type) MDPs and linear (or linearizable) rewards. In this paper, we study GAIL under general MDPs and for nonlinear reward function classes (as long as the objective function is strongly concave with respect to the reward parameter). We characterize global convergence with a sublinear rate for a broad range of commonly used policy gradient algorithms, all implemented in an alternating manner with stochastic gradient ascent for the reward update, including projected policy gradient (PPG)-GAIL, Frank-Wolfe policy gradient (FWPG)-GAIL, trust region policy optimization (TRPO)-GAIL, and natural policy gradient (NPG)-GAIL. This is the first systematic theoretical study of GAIL for global convergence.
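For orientation, the min-max objective that GAIL-type methods optimize can be sketched as follows; this is the standard regularized formulation with a parameterized reward r_w, written in generic notation that is an assumption here rather than the paper's own (sign conventions vary across the literature):

\[
\min_{\pi}\;\max_{w\in\mathcal{W}}\;\; \mathbb{E}_{(s,a)\sim \pi_E}\big[r_w(s,a)\big]\;-\;\mathbb{E}_{(s,a)\sim \pi}\big[r_w(s,a)\big]\;-\;\psi(w)
\]

Here \pi_E denotes the expert policy and \psi is a regularizer on the reward parameter; the strong concavity in w assumed above would typically come from such a regularizer. The alternating schemes named in the abstract update \pi with a policy gradient step (PPG, FWPG, TRPO, or NPG) and w with a stochastic gradient ascent step on this objective.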