Paper Title
Explaining AI as an Exploratory Process: The Peircean Abduction Model
Paper Authors
Paper Abstract
Current discussions of "Explainable AI" (XAI) give little consideration to the role of abduction in explanatory reasoning (see Mueller et al., 2018). It may be worthwhile to pursue this, in order to develop intelligent systems that allow for the observation and analysis of abductive reasoning, and for its assessment as a learnable skill. Abductive inference has been defined in many ways; it has been defined, for example, as the achievement of insight. Most often, abduction is taken to be a single, punctuated act of syllogistic reasoning, such as making a deductive or inductive inference from given premises. In contrast, the originator of the concept of abduction, the American scientist and philosopher Charles Sanders Peirce, regarded abduction as an exploratory activity. In this regard, Peirce's insights about reasoning align with conclusions from modern psychological research. Since abduction is often defined as "inferring the best explanation," the challenge of implementing abductive reasoning and the challenge of automating the explanation process are closely linked. We explore these linkages in this report. The analysis provides a theoretical framework for understanding what XAI researchers are already doing, explains why some XAI projects are succeeding (or might succeed), and leads to design advice.
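To make the abstract's central contrast concrete, the following Python sketch (not taken from the paper; the names Hypothesis, best_explanation, and exploratory_abduction are illustrative assumptions) contrasts abduction read as a single act of picking the best available explanation with abduction read, as Peirce did, as an exploratory cycle of generating, scoring, and revising candidate hypotheses.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Hypothesis:
    description: str
    plausibility: float  # prior degree of belief, 0..1 (illustrative)


def best_explanation(hypotheses: List[Hypothesis]) -> Hypothesis:
    """Single-step reading: abduction as one punctuated inference --
    return the most plausible candidate already on the table."""
    return max(hypotheses, key=lambda h: h.plausibility)


def exploratory_abduction(
    hypotheses: List[Hypothesis],
    score: Callable[[Hypothesis], float],
    generate_variants: Callable[[Hypothesis], List[Hypothesis]],
    cycles: int = 3,
) -> Hypothesis:
    """Exploratory reading: repeatedly re-score candidates against evidence,
    keep the strongest, and spawn revised variants for the next cycle."""
    pool = list(hypotheses)
    for _ in range(cycles):
        pool.sort(key=score, reverse=True)
        survivors = pool[:2]  # keep the strongest candidates
        revisions = [v for h in survivors for v in generate_variants(h)]
        pool = survivors + revisions  # explore around what currently works
    return max(pool, key=score)

The only point of the second function is that the candidate pool itself changes across cycles; that iterative reshaping of the hypothesis space is what the single-step "best explanation" reading leaves out.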