Paper Title
Layerwise Knowledge Extraction from Deep Convolutional Networks
Paper Authors
Abstract
Knowledge extraction is used to convert neural networks into symbolic descriptions with the objective of producing more comprehensible learning models. The central challenge is to find an explanation which is more comprehensible than the original model while still representing that model faithfully. The distributed nature of deep networks has led many to believe that the hidden features of a neural network cannot be explained by logical descriptions simple enough to be comprehensible. In this paper, we propose a novel layerwise knowledge extraction method using M-of-N rules which seeks to obtain the best trade-off between the complexity and accuracy of rules describing the hidden features of a deep network. We show empirically that this approach produces rules close to an optimal complexity-accuracy trade-off. We apply this method to a variety of deep networks and find that in the internal layers we often cannot find rules with a satisfactory complexity and accuracy, suggesting that rule extraction as a general-purpose method for explaining the internal logic of a neural network may be impossible. However, we also find that the softmax layer in convolutional neural networks and autoencoders using either tanh or ReLU activation functions is highly explainable by rule extraction, with compact rules consisting of as few as 3 units out of 128 often reaching over 99% accuracy. This shows that rule extraction can be a useful component for explaining parts (or modules) of a deep neural network.
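To make the central construct concrete, the following is a minimal sketch of how an M-of-N rule over a layer's hidden units could be evaluated. The function name, the unit indices, and the activation threshold are illustrative assumptions, not details from the paper:

```python
# Minimal sketch of an M-of-N rule over hidden-unit activations.
# An M-of-N rule fires when at least M of the N listed conditions hold;
# here each condition is "unit u's activation exceeds a threshold".
# All names and numbers below are hypothetical examples.

def m_of_n_rule(activations, units, threshold, m):
    """Return True if at least m of the given units exceed the threshold."""
    satisfied = sum(1 for u in units if activations[u] > threshold)
    return satisfied >= m

# Example: a rule "2-of-{3, 17, 42}" over a 128-unit layer.
layer = [0.0] * 128
layer[3], layer[17] = 0.9, 0.8   # two of the three relevant units are active
print(m_of_n_rule(layer, units=[3, 17, 42], threshold=0.5, m=2))  # True
```

Extraction then amounts to searching over the choice of units, threshold, and M so that the rule's truth value matches the target unit's behaviour as closely as possible while keeping N small.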