Paper Title
Contrastive Explanations in Neural Networks
Paper Authors
Paper Abstract
Visual explanations are logical arguments based on visual features that justify the predictions made by neural networks. Current modes of visual explanations answer questions of the form $`Why \text{ } P?'$. These $Why$ questions operate under broad contexts, thereby providing answers that are irrelevant in some cases. We propose to constrain these $Why$ questions based on some context $Q$ so that our explanations answer contrastive questions of the form $`Why \text{ } P, \text{ } rather \text{ } than \text{ } Q?'$. In this paper, we formalize the structure of contrastive visual explanations for neural networks. We define contrast based on neural networks and propose a methodology to extract defined contrasts. We then use the extracted contrasts as a plug-in on top of existing $`Why \text{ } P?'$ techniques, specifically Grad-CAM. We demonstrate their value in analyzing both networks and data in applications of large-scale recognition, fine-grained recognition, subsurface seismic analysis, and image quality assessment.
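The contrastive plug-in described above can be illustrated with a minimal sketch: where standard Grad-CAM weights feature channels by the gradient of the logit for class $P$, a contrastive variant weights them by the gradient of the *difference* of logits, $logit_P - logit_Q$, answering "Why $P$, rather than $Q$?". The sketch below is not the authors' implementation; it assumes a toy network whose head is a linear classifier applied after global average pooling, for which these gradients are available in closed form. The function name `contrastive_gradcam` and all shapes are illustrative assumptions.

```python
import numpy as np

def contrastive_gradcam(feats, W, p, q):
    """Toy contrastive Grad-CAM heatmap for "Why p, rather than q?".

    Assumed toy setup (not the paper's full method): feats is a (C, H, W)
    array of last-layer feature maps, and W is a (num_classes, C) linear
    classifier applied after global average pooling, so that
        logit_k = W[k] @ feats.mean(axis=(1, 2)).
    The gradient of (logit_p - logit_q) w.r.t. feats[c, i, j] is then
    (W[p, c] - W[q, c]) / (H * W), giving the Grad-CAM channel weights
    in closed form, with no autograd needed.
    """
    C, H, Wd = feats.shape
    alpha = (W[p] - W[q]) / (H * Wd)          # per-channel gradient weights
    cam = (alpha[:, None, None] * feats).sum(axis=0)
    cam = np.maximum(cam, 0.0)                # ReLU, as in Grad-CAM
    if cam.max() > 0:
        cam = cam / cam.max()                 # normalize to [0, 1]
    return cam

# Tiny example: channel 0 fires at (0, 0) and supports class 0;
# channel 1 fires at (1, 1) and supports class 1.
feats = np.zeros((2, 2, 2))
feats[0, 0, 0] = 1.0
feats[1, 1, 1] = 1.0
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])
cam = contrastive_gradcam(feats, W, p=0, q=1)
```

With `p=0, q=1`, the heatmap highlights only the region where evidence for class 0 exceeds evidence for class 1 (location `(0, 0)`); evidence favoring the contrast class is clipped by the ReLU, which is the intuition behind using the contrast as a plug-in on top of Grad-CAM.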