Paper Title

One Explanation Does Not Fit All: The Promise of Interactive Explanations for Machine Learning Transparency

Paper Authors

Kacper Sokol, Peter Flach

Paper Abstract

The need for transparency of predictive systems based on Machine Learning algorithms arises as a consequence of their ever-increasing proliferation in the industry. Whenever black-box algorithmic predictions influence human affairs, the inner workings of these algorithms should be scrutinised and their decisions explained to the relevant stakeholders, including the system engineers, the system's operators and the individuals whose case is being decided. While a variety of interpretability and explainability methods is available, none of them is a panacea that can satisfy all diverse expectations and competing objectives that might be required by the parties involved. We address this challenge in this paper by discussing the promises of Interactive Machine Learning for improved transparency of black-box systems using the example of contrastive explanations -- a state-of-the-art approach to Interpretable Machine Learning. Specifically, we show how to personalise counterfactual explanations by interactively adjusting their conditional statements and extract additional explanations by asking follow-up "What if?" questions. Our experience in building, deploying and presenting this type of system allowed us to list desired properties as well as potential limitations, which can be used to guide the development of interactive explainers. While customising the medium of interaction, i.e., the user interface comprising various communication channels, may give an impression of personalisation, we argue that adjusting the explanation itself and its content is more important. To this end, properties such as breadth, scope, context, purpose and target of the explanation have to be considered, in addition to explicitly informing the explainee about its limitations and caveats...
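To make the abstract's central idea concrete, below is a minimal, hypothetical Python sketch of interactively personalised counterfactual explanations. It is not the authors' implementation: the toy loan-approval model, the feature names and the grid search are all illustrative assumptions. The user pins ("fixes") features they cannot change and asks a follow-up "What if?" question; the explainer then returns a counterfactual whose conditional statement respects those constraints.

# Illustrative sketch only; names (toy_model, counterfactual, fixed) are
# hypothetical and do not come from the paper.
from itertools import product

def toy_model(income, debt):
    # A toy black-box decision rule over two features (in thousands).
    return "approved" if income - 2 * debt >= 50 else "rejected"

def counterfactual(income, debt, fixed=()):
    """Search a small grid for the nearest input that flips the decision,
    holding any user-pinned features constant (the interactive step)."""
    original = toy_model(income, debt)
    incomes = [income] if "income" in fixed else range(0, 201, 5)
    debts = [debt] if "debt" in fixed else range(0, 101, 5)
    candidates = []
    for i, d in product(incomes, debts):
        if toy_model(i, d) != original:
            # Rank candidates by L1 distance from the original input.
            candidates.append((abs(i - income) + abs(d - debt), i, d))
    if not candidates:
        return "No counterfactual found within the search grid."
    _, i, d = min(candidates)
    return f"Had income been {i} and debt {d}, the decision would flip."

# Unconstrained explanation, then a "What if?" follow-up with debt pinned.
print(counterfactual(income=60, debt=20))
print(counterfactual(income=60, debt=20, fixed=("debt",)))

Pinning debt changes the explanation from "reduce your debt" to "raise your income": the content of the explanation adapts to the explainee's constraints, which is the kind of personalisation the paper argues matters more than merely customising the interaction medium.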
