Paper Title
Quantile-Based Policy Optimization for Reinforcement Learning
Paper Authors
Paper Abstract
Classical reinforcement learning (RL) aims to optimize the expected cumulative reward. In this work, we consider the RL setting where the goal is to optimize a quantile of the cumulative reward. We parameterize the action-selection policy with neural networks and propose a novel policy gradient algorithm, Quantile-Based Policy Optimization (QPO), together with its variant, Quantile-Based Proximal Policy Optimization (QPPO), to solve deep RL problems with quantile objectives. QPO uses two coupled iterations running on different time scales to simultaneously estimate the quantile and the policy parameters, and it is shown to converge to the globally optimal policy under certain conditions. Our numerical results demonstrate that the proposed algorithms outperform existing baseline algorithms under the quantile criterion.
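
To make the abstract's two-timescale description concrete, below is a minimal, illustrative sketch of how coupled quantile tracking and quantile-driven policy updates could be wired together. It is not the authors' implementation: the Gymnasium-style environment interface, the PyTorch policy, the `rollout` and `qpo_sketch` helpers, the quantile level `alpha`, and the step-size schedules `beta_k` and `gamma_k` are all assumptions made for illustration.

```python
import torch


def rollout(env, policy):
    """Run one episode and return per-step log-probs plus the total return.

    Assumes a Gymnasium-style environment with discrete actions and a PyTorch
    policy mapping a state tensor to action logits (both hypothetical here).
    """
    log_probs, ret, done = [], 0.0, False
    state, _ = env.reset()
    while not done:
        logits = policy(torch.as_tensor(state, dtype=torch.float32))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        state, reward, terminated, truncated, _ = env.step(action.item())
        ret += float(reward)
        done = terminated or truncated
    return log_probs, ret


def qpo_sketch(env, policy, alpha=0.25, n_iters=10_000):
    """Two-timescale sketch: a fast iterate tracks the alpha-quantile of the
    episode return, while a slow iterate updates the policy parameters with a
    likelihood-ratio gradient of the return CDF evaluated at that quantile."""
    q = 0.0  # running estimate of the alpha-quantile of the cumulative reward
    for k in range(1, n_iters + 1):
        beta_k = k ** -0.6   # faster step size for the quantile iterate (illustrative)
        gamma_k = 1.0 / k    # slower step size for the policy iterate (illustrative)

        log_probs, ret = rollout(env, policy)

        # Fast iterate: stochastic approximation of the alpha-quantile,
        # q <- q + beta_k * (alpha - 1{return <= q}).
        q += beta_k * (alpha - float(ret <= q))

        # Slow iterate: 1{return <= q} * grad(sum of log-probs) is a
        # likelihood-ratio estimate of the gradient of the return CDF at q;
        # stepping against it pushes the alpha-quantile upward (the positive
        # density denominator of the quantile gradient is dropped, which only
        # rescales the step).
        surrogate = float(ret <= q) * torch.stack(log_probs).sum()
        policy.zero_grad()
        surrogate.backward()
        with torch.no_grad():
            for p in policy.parameters():
                if p.grad is not None:
                    p -= gamma_k * p.grad
    return q
```

The relative speed of the two iterates and the exact step-size conditions are determined by the paper's convergence analysis; the decay rates above are placeholders chosen only to make the two coupled timescales visible in code.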