Paper Title
Learning to Switch Among Agents in a Team via 2-Layer Markov Decision Processes
Paper Authors
Paper Abstract
Reinforcement learning agents have mostly been developed and evaluated under the assumption that they will operate in a fully autonomous manner -- that they will take all actions. In this work, our goal is to develop algorithms that, by learning to switch control between agents, allow existing reinforcement learning agents to operate under different automation levels. To this end, we first formally define the problem of learning to switch control among agents in a team via a 2-layer Markov decision process. Then, we develop an online learning algorithm that uses upper confidence bounds on the agents' policies and on the environment's transition probabilities to find a sequence of switching policies. The total regret of our algorithm with respect to the optimal switching policy is sublinear in the number of learning steps. Moreover, whenever multiple teams of agents operate in similar environments, our algorithm benefits greatly from maintaining shared confidence bounds for the environments' transition probabilities and enjoys a better regret bound than problem-agnostic algorithms. Simulation experiments on an obstacle avoidance task illustrate our theoretical findings and demonstrate that, by exploiting the specific structure of the problem, our proposed algorithm is superior to problem-agnostic algorithms.
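To make the idea of a confidence-bound-driven switching policy concrete, the following is a minimal, hedged sketch in Python. It is an illustrative toy, not the paper's 2-layer MDP algorithm: the two agents ("machine" and "human"), their per-state success probabilities, and the switching cost are all invented for this example. The switcher maintains per-agent, per-state empirical success rates and picks whichever agent has the higher upper confidence bound, in the spirit of the UCB estimates described in the abstract.

```python
import math
import random

# Illustrative assumptions (not from the paper): two agents whose
# per-state success probabilities are unknown to the switcher, and a
# small cost for transferring control between agents.
random.seed(0)

N_STATES = 3
TRUE_SUCCESS = {            # hidden ground truth: P(success | agent, state)
    "machine": [0.9, 0.4, 0.7],
    "human":   [0.5, 0.8, 0.6],
}
SWITCH_COST = 0.05          # penalty applied when control changes hands

counts = {a: [0] * N_STATES for a in TRUE_SUCCESS}
wins = {a: [0] * N_STATES for a in TRUE_SUCCESS}

def ucb(agent, state, t):
    """Upper confidence bound on the agent's success rate in this state."""
    n = counts[agent][state]
    if n == 0:
        return float("inf")  # force at least one trial per (agent, state)
    mean = wins[agent][state] / n
    return mean + math.sqrt(2 * math.log(t + 1) / n)

total_reward = 0.0
prev_agent = None
T = 5000
for t in range(T):
    state = t % N_STATES
    # Switching policy: give control to the agent with the highest UCB.
    agent = max(TRUE_SUCCESS, key=lambda a: ucb(a, state, t))
    success = random.random() < TRUE_SUCCESS[agent][state]
    reward = float(success)
    if prev_agent is not None and prev_agent != agent:
        reward -= SWITCH_COST
    counts[agent][state] += 1
    wins[agent][state] += int(success)
    total_reward += reward
    prev_agent = agent

# After enough steps, the switcher should mostly assign each state to
# the agent that is actually better there.
best = [max(TRUE_SUCCESS, key=lambda a: counts[a][s]) for s in range(N_STATES)]
print(best, round(total_reward, 1))
```

In this toy run, the controller learns to prefer the machine agent in states where it succeeds more often and the human agent elsewhere; the switching cost captures the paper's motivation that transfers of control should not be gratuitous.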