Paper Title
How do Offline Measures for Exploration in Reinforcement Learning behave?
Paper Authors
Paper Abstract
Sufficient exploration is paramount for the success of a reinforcement learning agent. Yet, exploration is rarely assessed in an algorithm-independent way. We compare the behavior of three data-based, offline exploration metrics described in the literature on simple, intuitive distributions and highlight problems to be aware of when using them. We propose a fourth metric, uniform relative entropy, and implement it using either a k-nearest-neighbor or a nearest-neighbor-ratio estimator, highlighting that implementation choices have a profound impact on these measures.
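For concreteness, below is a minimal sketch of one way a uniform relative entropy could be estimated with a k-nearest-neighbor (Kozachenko-Leonenko) entropy estimator, assuming state samples normalized to the unit cube [0, 1]^d, on which the KL divergence to the uniform distribution reduces to the negative differential entropy. The function names and the default k are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def knn_entropy(x: np.ndarray, k: int = 3) -> float:
    """Kozachenko-Leonenko k-NN estimate of differential entropy (in nats).

    x: array of shape (n_samples, d) of visited states.
    """
    n, d = x.shape
    tree = cKDTree(x)
    # query returns each point itself as its own nearest neighbor,
    # so ask for k+1 neighbors and keep the distance to the k-th proper one
    dist, _ = tree.query(x, k=k + 1)
    eps = dist[:, -1]
    # log volume of the unit d-ball: pi^(d/2) / Gamma(d/2 + 1)
    log_c_d = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)
    # small constant guards against log(0) for duplicated samples
    return digamma(n) - digamma(k) + log_c_d + d * np.mean(np.log(eps + 1e-12))

def uniform_relative_entropy(x: np.ndarray, k: int = 3) -> float:
    """KL divergence of the empirical state distribution to the uniform
    distribution on [0, 1]^d; the unit cube has log-volume 0, so
    D_KL(p || U) = -H(p)."""
    return -knn_entropy(x, k)
```

On samples drawn uniformly from the unit cube the estimate should be near zero, while a distribution concentrated in a small region of the state space yields a larger value, which matches the intended reading of the metric: lower is closer to uniform coverage, i.e. more exploratory.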