Paper Title
Symbolic Explanation of Affinity-Based Reinforcement Learning Agents with Markov Models
Paper Authors
Paper Abstract
The proliferation of artificial intelligence is increasingly dependent on model understanding. Understanding demands both an interpretation - a human reasoning about a model's behavior - and an explanation - a symbolic representation of the functioning of the model. Notwithstanding the imperative of transparency for safety, trust, and acceptance, the opacity of state-of-the-art reinforcement learning algorithms conceals the rudiments of their learned strategies. We have developed a policy regularization method that asserts the global intrinsic affinities of learned strategies. These affinities provide a means of reasoning about a policy's behavior, thus making it inherently interpretable. We have demonstrated our method in personalized prosperity management, where an individual's spending behavior over time dictates their investment strategy, i.e., distinct spending personalities may have dissimilar associations with different investment classes. We now explain our model by reproducing the underlying prototypical policies with discretized Markov models. These global surrogates are symbolic representations of the prototypical policies.
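The idea of a discretized Markov model as a global surrogate can be sketched as follows: roll out a policy, discretize the visited states, and estimate a transition matrix by maximum-likelihood counting. This is a minimal illustration only; the function name, the counting scheme, and the uniform fallback for unvisited states are assumptions for the sketch, not the paper's exact procedure.

```python
import numpy as np

def fit_markov_surrogate(trajectories, n_states):
    """Estimate a row-stochastic transition matrix from discretized
    state trajectories generated by a (black-box) policy.

    trajectories: list of sequences of integer state indices.
    n_states: size of the discretized state space.
    """
    counts = np.zeros((n_states, n_states))
    for traj in trajectories:
        # Count each observed transition s -> s_next.
        for s, s_next in zip(traj, traj[1:]):
            counts[s, s_next] += 1
    # Rows for states never visited fall back to a uniform distribution.
    unvisited = counts.sum(axis=1) == 0
    counts[unvisited] = 1.0
    # Normalize rows so each is a probability distribution.
    return counts / counts.sum(axis=1, keepdims=True)

# Example: two short trajectories over 3 discretized states.
trajs = [[0, 1, 1, 2], [0, 1, 2, 2]]
P = fit_markov_surrogate(trajs, 3)
```

The resulting matrix `P` is the symbolic artifact: each row can be read directly as "from this discretized state, the policy moves to these states with these probabilities", which supports human reasoning about the policy's behavior.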