Paper Title

MEGAN: Multi-Explanation Graph Attention Network

Paper Authors

Teufel, Jonas, Torresi, Luca, Reiser, Patrick, Friederich, Pascal

Paper Abstract

We propose a multi-explanation graph attention network (MEGAN). Unlike existing graph explainability methods, our network can produce node and edge attributional explanations along multiple channels, the number of which is independent of task specifications. This proves crucial to improve the interpretability of graph regression predictions, as explanations can be split into positive and negative evidence w.r.t. a reference value. Additionally, our attention-based network is fully differentiable and explanations can actively be trained in an explanation-supervised manner. We first validate our model on a synthetic graph regression dataset with known ground-truth explanations. Our network outperforms existing baseline explainability methods for the single- as well as the multi-explanation case, achieving near-perfect explanation accuracy during explanation supervision. Finally, we demonstrate our model's capabilities on multiple real-world datasets. We find that our model produces sparse high-fidelity explanations consistent with human intuition about those tasks.
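To make the abstract's core idea concrete, the following is a minimal toy sketch (not the authors' architecture) of how multiple attention channels can yield per-node explanation masks, with two channels contributing positive and negative evidence relative to a reference value for a regression output. All names (`W_att`, `w_out`, `reference`) and the single-layer structure are illustrative assumptions, not MEGAN's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 4 nodes with 3 features each (edges omitted in this sketch).
X = rng.normal(size=(4, 3))

K = 2                              # explanation channels: positive / negative evidence
W_att = rng.normal(size=(3, K))    # hypothetical per-channel attention weights
w_out = rng.normal(size=3)         # hypothetical readout weights
reference = 0.0                    # reference value the evidence is measured against

# Per-channel node attention: a softmax over nodes gives one node mask per channel.
scores = X @ W_att                                          # (nodes, channels)
masks = np.exp(scores) / np.exp(scores).sum(axis=0, keepdims=True)

# Channel-wise attention-weighted pooling of node features.
pooled = masks.T @ X                                        # (channels, features)

# Signed combination: channel 0 adds evidence, channel 1 subtracts it.
contrib = pooled @ w_out                                    # (channels,)
prediction = reference + contrib[0] - contrib[1]

print(masks.shape)        # one node-importance mask per channel
print(float(prediction))
```

Because the masks are produced by differentiable attention, a supervised explanation loss could be applied to `masks` directly, which is the property the abstract refers to as explanation-supervised training.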
