Paper Title

Locally Invariant Explanations: Towards Stable and Unidirectional Explanations through Local Invariant Learning

Authors

Amit Dhurandhar, Karthikeyan Ramamurthy, Kartik Ahuja, Vijay Arya

Abstract

The Local Interpretable Model-agnostic Explanations (LIME) method is one of the most popular methods used to explain black-box models at a per-example level. Although many variants have been proposed, few provide a simple way to produce high-fidelity explanations that are also stable and intuitive. In this work, we provide a novel perspective by proposing a model-agnostic local explanation method inspired by the invariant risk minimization (IRM) principle -- originally proposed for (global) out-of-distribution generalization -- to produce high-fidelity explanations that are also stable and unidirectional across nearby examples. Our method is based on a game-theoretic formulation in which we theoretically show that our approach has a strong tendency to eliminate features where the gradient of the black-box function abruptly changes sign in the locality of the example we want to explain, while in other cases it is more careful and chooses a more conservative (feature) attribution, a behavior which can be highly desirable for recourse. Empirically, we show on tabular, image, and text data that the quality of our explanations with neighborhoods formed using random perturbations is much better than LIME's, and in some cases even comparable to that of other methods that use realistic neighbors sampled from the data manifold. This is desirable given that learning a manifold to either create realistic neighbors or to project explanations is typically expensive or may even be impossible. Moreover, our algorithm is simple and efficient to train, and can ascertain stable input features for local decisions of a black-box without access to side information such as a (partial) causal graph, as has been seen in some recent works.
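The paper's actual method is the game-theoretic formulation summarized above; as a rough, minimal sketch of the underlying idea only, the snippet below fits a local linear surrogate across several random-perturbation "environments" around the example, adding an IRMv1-style invariance penalty so that the surviving coefficients fit all neighborhood scales simultaneously. All names here (`black_box`, `explain_locally`) and the specific choices of penalty, sampler, and optimizer are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical black-box scoring function; stands in for any model's
# batch prediction. Not from the paper.
def black_box(X):
    return np.tanh(2.0 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0] * X[:, 1])

def explain_locally(f, x, n_envs=4, n_per_env=200, scale=0.1, lam=10.0, seed=0):
    """Fit a linear surrogate around x with an IRMv1-style penalty.

    Each "environment" is a random-perturbation neighborhood of x drawn
    at a different noise scale; the penalty favors coefficients that are
    simultaneously (locally) optimal in every environment.
    """
    rng = np.random.default_rng(seed)
    envs = []
    for e in range(1, n_envs + 1):
        Z = x + rng.normal(scale=e * scale, size=(n_per_env, x.size))
        # Centered features, black-box outputs relative to f(x).
        envs.append((Z - x, f(Z) - f(x[np.newaxis, :])))

    def objective(w):
        total = 0.0
        for Zc, y in envs:
            pred = Zc @ w
            resid = pred - y
            risk = np.mean(resid ** 2)
            # IRMv1 penalty: squared gradient of the environment risk
            # w.r.t. a scalar multiplier s on the predictions, at s = 1.
            grad_s = np.mean(2.0 * resid * pred)
            total += risk + lam * grad_s ** 2
        return total / len(envs)

    res = minimize(objective, np.zeros_like(x, dtype=float), method="L-BFGS-B")
    return res.x  # per-feature attribution for f around x

x0 = np.array([0.5, -0.3])
print(explain_locally(black_box, x0))
```

In this toy setup, a coefficient that cannot fit every neighborhood scale at once is driven toward zero by the penalty, which loosely mirrors the paper's described tendency to drop features whose local gradient flips sign while keeping conservative attributions elsewhere.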
