Paper Title

Counterfactual Explanations for Predictive Business Process Monitoring

Paper Authors

Tsung-Hao Huang, Andreas Metzger, Klaus Pohl

Paper Abstract

Predictive business process monitoring increasingly leverages sophisticated prediction models. Although sophisticated models consistently achieve higher prediction accuracy than simple models, one major drawback is their lack of interpretability, which limits their adoption in practice. We thus see growing interest in explainable predictive business process monitoring, which aims to increase the interpretability of prediction models. Existing solutions focus on giving factual explanations. While factual explanations can be helpful, humans typically do not ask why a particular prediction was made, but rather why it was made instead of another prediction, i.e., humans are interested in counterfactual explanations. While research in explainable AI has produced several promising techniques for generating counterfactual explanations, directly applying them to predictive process monitoring may deliver unrealistic explanations, because they ignore the underlying process constraints. We propose LORELEY, a counterfactual explanation technique for predictive process monitoring, which extends LORE, a recent explainable AI technique. We impose control-flow constraints on the explanation generation process to ensure realistic counterfactual explanations. Moreover, we extend LORE to enable explaining multi-class classification models. Experimental results using a real, public dataset indicate that LORELEY can approximate the prediction models with an average fidelity of 97.69% and generate realistic counterfactual explanations.
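To make the core idea concrete, the sketch below is a minimal, hypothetical illustration of why control-flow constraints matter when searching for counterfactual traces: candidate traces that flip the prediction are kept only if they respect the process's allowed activity transitions. It is not the authors' method (LORELEY extends LORE's genetic-algorithm search); all names here (ACTIVITIES, ALLOWED_TRANSITIONS, predict_outcome) are invented for the example.

```python
# Toy sketch, not LORELEY itself: brute-force single-edit counterfactual
# search over process traces, with control-flow constraints as a filter.
import itertools

# Hypothetical process vocabulary and allowed transitions (a, b) meaning
# activity b may directly follow activity a in a valid trace.
ACTIVITIES = ["register", "check", "approve", "reject", "notify"]
ALLOWED_TRANSITIONS = {
    "register": {"check"},
    "check": {"approve", "reject"},
    "approve": {"notify"},
    "reject": {"notify"},
    "notify": set(),
}

def is_valid(trace):
    """True if every consecutive activity pair respects the control flow."""
    return all(b in ALLOWED_TRANSITIONS[a] for a, b in zip(trace, trace[1:]))

def predict_outcome(trace):
    """Stand-in for the black-box prediction model being explained."""
    return "positive" if "approve" in trace else "negative"

def counterfactuals(trace, desired):
    """Enumerate single-activity substitutions that flip the prediction to
    `desired` while remaining control-flow valid (i.e., realistic)."""
    results = []
    for i, replacement in itertools.product(range(len(trace)), ACTIVITIES):
        candidate = list(trace)
        candidate[i] = replacement
        if candidate == list(trace):
            continue
        if is_valid(candidate) and predict_outcome(candidate) == desired:
            results.append(candidate)
    return results

factual = ["register", "check", "reject", "notify"]
print(counterfactuals(factual, desired="positive"))
# -> [['register', 'check', 'approve', 'notify']]
```

Without the `is_valid` filter, the search would also return traces such as ['approve', 'check', 'reject', 'notify'], which flip the prediction but violate the process: this is the kind of unrealistic explanation the abstract says unconstrained explainable-AI techniques produce.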
