Paper Title

DeepNNK: Explaining deep models and their generalization using polytope interpolation

Authors

Shekkizhar, Sarath, Ortega, Antonio

Abstract

Modern machine learning systems based on neural networks have shown great success in learning complex data patterns while being able to make good predictions on unseen data points. However, the limited interpretability of these systems hinders further progress and their application to several real-world domains. This predicament is exemplified by time-consuming model selection and the difficulties faced in predictive explainability, especially in the presence of adversarial examples. In this paper, we take a step towards better understanding of neural networks by introducing a local polytope interpolation method. The proposed Deep Non-Negative Kernel regression (NNK) interpolation framework is non-parametric, theoretically simple, and geometrically intuitive. We demonstrate instance-based explainability for deep learning models and develop a method to identify models with good generalization properties using leave-one-out estimation. Finally, we provide a rationalization for adversarial and generative examples, which are inevitable from an interpolation view of machine learning.
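The abstract's leave-one-out idea can be illustrated with a minimal sketch: hold out each sample in turn, predict it from its neighbors in feature space, and report the resulting error as a proxy for generalization. This is a plain k-nearest-neighbor stand-in, not the paper's NNK polytope interpolation; the function name and data are hypothetical.

```python
import numpy as np

def loo_knn_error(X, y, k=3):
    """Leave-one-out error of a simple k-NN classifier on feature
    embeddings X with integer labels y. Illustrative stand-in for an
    NNK-style leave-one-out estimate (hypothetical helper, not the
    authors' implementation)."""
    n = len(X)
    errors = 0
    for i in range(n):
        # distances from the held-out sample i to all samples
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf  # exclude the held-out point itself
        nbrs = np.argsort(d)[:k]
        # majority vote among the k nearest neighbors
        pred = np.bincount(y[nbrs]).argmax()
        errors += int(pred != y[i])
    return errors / n

# Two well-separated clusters: a model/embedding that separates the
# classes this cleanly yields a low leave-one-out error.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(3, 0.1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
print(loo_knn_error(X, y))
```

In the paper's setting, `X` would be the deep network's last-layer features rather than raw inputs, so the leave-one-out error tracks how well the learned representation supports local interpolation.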
