Paper Title
Policy Learning of MDPs with Mixed Continuous/Discrete Variables: A Case Study on Model-Free Control of Markovian Jump Systems
Paper Authors
Paper Abstract
Markovian jump linear systems (MJLS) are an important class of dynamical systems that arise in many control applications. In this paper, we introduce the problem of controlling unknown (discrete-time) MJLS as a new benchmark for policy-based reinforcement learning of Markov decision processes (MDPs) with mixed continuous/discrete state variables. Compared with the traditional linear quadratic regulator (LQR), our proposed problem leads to a special hybrid MDP (with mixed continuous and discrete variables) and poses significant new challenges due to the appearance of an underlying Markov jump parameter governing the mode of the system dynamics. Specifically, the state of an MJLS does not form a Markov chain, and hence one cannot study the MJLS control problem as an MDP with only continuous state variables. However, one can augment the state and the jump parameter to obtain an MDP with a mixed continuous/discrete state space. We discuss how control theory sheds light on the policy parameterization of such hybrid MDPs. We then modify the widely used natural policy gradient method to directly learn the optimal state feedback control policy for MJLS without identifying either the system dynamics or the transition probability of the switching parameter. We implement the (data-driven) natural policy gradient method on different MJLS examples. Our simulation results suggest that the natural gradient method can efficiently learn the optimal controller for MJLS with unknown dynamics.
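To make the setup concrete, the abstract describes an MJLS x_{t+1} = A(w_t) x_t + B(w_t) u_t whose mode w_t evolves as a Markov chain, controlled by a mode-dependent state feedback policy u_t = -K(w_t) x_t that is learned with a data-driven natural policy gradient method. Below is a minimal sketch (not the authors' code) of this idea: a zeroth-order estimate of the policy gradient from rollout costs, preconditioned per mode by an estimated state correlation matrix. The two-mode dynamics, cost weights, horizon, and step sizes are illustrative assumptions.

```python
# Minimal sketch of model-free natural policy gradient for an MJLS.
# All system matrices and hyperparameters below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-mode MJLS: x_{t+1} = A[w] x_t + B[w] u_t, w ~ Markov chain P.
A = [np.array([[1.0, 0.5], [0.0, 1.0]]), np.array([[0.9, 0.2], [0.1, 0.8]])]
B = [np.array([[0.0], [1.0]]), np.array([[0.5], [1.0]])]
P = np.array([[0.8, 0.2], [0.3, 0.7]])       # mode transition probabilities
Q, R = np.eye(2), np.eye(1)                  # quadratic stage-cost weights
n_modes, n_x, n_u, T = 2, 2, 1, 30

def rollout(K):
    """Simulate one trajectory under the mode-dependent policy u = -K[w] x.
    Returns the accumulated cost and per-mode state correlation sums."""
    x = rng.standard_normal(n_x)
    w = int(rng.integers(n_modes))
    cost = 0.0
    sigma = [np.zeros((n_x, n_x)) for _ in range(n_modes)]
    for _ in range(T):
        u = -K[w] @ x
        cost += x @ Q @ x + u @ R @ u
        sigma[w] += np.outer(x, x)
        x = A[w] @ x + B[w] @ u
        w = int(rng.choice(n_modes, p=P[w]))
    return cost, sigma

def natural_pg_step(K, radius=0.05, n_samples=40, lr=1e-2):
    """One zeroth-order natural gradient step: perturb the gains, estimate the
    gradient from cost differences, and precondition each mode's gradient by
    the pseudo-inverse of that mode's estimated state correlation matrix."""
    grad = [np.zeros((n_u, n_x)) for _ in range(n_modes)]
    sigma = [np.zeros((n_x, n_x)) for _ in range(n_modes)]
    for _ in range(n_samples):
        U = [rng.standard_normal((n_u, n_x)) for _ in range(n_modes)]
        c_plus, s = rollout([K[i] + radius * U[i] for i in range(n_modes)])
        c_minus, _ = rollout([K[i] - radius * U[i] for i in range(n_modes)])
        for i in range(n_modes):
            grad[i] += (c_plus - c_minus) / (2 * radius) * U[i]
            sigma[i] += s[i]
    return [K[i] - lr * (grad[i] / n_samples) @ np.linalg.pinv(sigma[i] / n_samples)
            for i in range(n_modes)]

# Run a few data-driven updates starting from zero gains (no model identification).
K = [np.zeros((n_u, n_x)) for _ in range(n_modes)]
for _ in range(200):
    K = natural_pg_step(K)
print("learned mode-dependent gains:", K)
```

The per-mode preconditioning is what distinguishes this sketch of a natural gradient update from a vanilla policy gradient step: each gain K[i] is scaled by the inverse correlation of the states visited while in mode i, which is estimated from the same rollouts used for the gradient.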