Paper Title

Safe Imitation Learning via Fast Bayesian Reward Inference from Preferences

Paper Authors

Daniel S. Brown, Russell Coleman, Ravi Srinivasan, Scott Niekum

Paper Abstract

Bayesian reward learning from demonstrations enables rigorous safety and uncertainty analysis when performing imitation learning. However, Bayesian reward learning methods are typically computationally intractable for complex control problems. We propose Bayesian Reward Extrapolation (Bayesian REX), a highly efficient Bayesian reward learning algorithm that scales to high-dimensional imitation learning problems by pre-training a low-dimensional feature encoding via self-supervised tasks and then leveraging preferences over demonstrations to perform fast Bayesian inference. Bayesian REX can learn to play Atari games from demonstrations, without access to the game score and can generate 100,000 samples from the posterior over reward functions in only 5 minutes on a personal laptop. Bayesian REX also results in imitation learning performance that is competitive with or better than state-of-the-art methods that only learn point estimates of the reward function. Finally, Bayesian REX enables efficient high-confidence policy evaluation without having access to samples of the reward function. These high-confidence performance bounds can be used to rank the performance and risk of a variety of evaluation policies and provide a way to detect reward hacking behaviors.
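The speed-up described in the abstract comes from assuming the reward is linear in a pre-trained, low-dimensional feature encoding, so that MCMC over reward functions only needs the cached feature sums of the demonstrations and a Bradley-Terry preference likelihood. Below is a minimal sketch of that inference step (not the authors' released code); the feature dimension, proposal width, inverse temperature `beta`, and the random placeholder features are illustrative assumptions.

```python
# Sketch of preference-based Bayesian reward inference over a fixed feature encoding.
# Assumption: demo_features would come from the pre-trained self-supervised encoder;
# here they are random placeholders so the example is self-contained.
import numpy as np

rng = np.random.default_rng(0)

num_demos, feat_dim = 20, 64
demo_features = rng.normal(size=(num_demos, feat_dim))   # cached Phi(tau) per trajectory

# Pairwise preferences (i, j): trajectory j is preferred over trajectory i.
prefs = [(i, i + 1) for i in range(num_demos - 1)]

beta = 1.0  # inverse temperature of the Bradley-Terry preference likelihood

def log_likelihood(w):
    """Sum of Bradley-Terry log-probabilities that each preferred demo scores higher."""
    returns = demo_features @ w                           # predicted return of every demo
    total = 0.0
    for i, j in prefs:
        # log P(tau_j preferred) = beta*R_j - logsumexp(beta*R_i, beta*R_j)
        total += beta * returns[j] - np.logaddexp(beta * returns[i], beta * returns[j])
    return total

def mcmc(num_samples=100_000, step=0.05):
    """Metropolis-Hastings over unit-norm linear reward weights."""
    w = rng.normal(size=feat_dim)
    w /= np.linalg.norm(w)
    ll = log_likelihood(w)
    samples = []
    for _ in range(num_samples):
        proposal = w + step * rng.normal(size=feat_dim)
        proposal /= np.linalg.norm(proposal)              # keep weights on the unit sphere
        ll_prop = log_likelihood(proposal)
        if np.log(rng.random()) < ll_prop - ll:           # accept with prob min(1, ratio)
            w, ll = proposal, ll_prop
        samples.append(w)
    return np.array(samples)

posterior_samples = mcmc()

# Posterior over the return of a new trajectory with cached features phi_new;
# a low quantile of this distribution gives a high-confidence performance bound.
phi_new = rng.normal(size=feat_dim)
return_dist = posterior_samples @ phi_new
```

Because every MCMC step reduces to a handful of small matrix-vector products, drawing on the order of 100,000 posterior samples is cheap, which is what makes the high-confidence policy evaluation in the abstract practical on a laptop.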
