Paper Title
Exploring Adversarial Attack in Spiking Neural Networks with Spike-Compatible Gradient
Paper Authors
Paper Abstract
Recently, backpropagation through time (BPTT)-inspired learning algorithms have been widely introduced into SNNs to improve performance, which opens the possibility of attacking these models accurately given their spatio-temporal gradient maps. We propose two approaches to address the challenges of gradient-input incompatibility and gradient vanishing. Specifically, we design a gradient-to-spike converter that turns continuous gradients into ternary gradients compatible with spike inputs. We then design a gradient trigger that, when an all-zero gradient map is encountered, constructs ternary gradients that randomly flip the spike inputs at a controllable turnover rate. Putting these methods together, we build an adversarial attack methodology for SNNs trained by supervised algorithms. Moreover, we analyze the influence of the training loss function and the firing threshold of the penultimate layer, which reveals a "trap" region under the cross-entropy loss that can be escaped by threshold tuning. Extensive experiments validate the effectiveness of our solution. Beyond the quantitative analysis of these influence factors, we show that SNNs are more robust against adversarial attacks than ANNs. This work helps reveal what happens during SNN attacks and may stimulate more research on the security of SNN models and neuromorphic devices.
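
To make the gradient-to-spike conversion concrete, below is a minimal NumPy sketch. The function name gradient_to_spike, the magnitude-proportional sampling probability, and the direction-clipping rule are illustrative assumptions inferred from the abstract's description, not the paper's exact formulation; the essential point is that the output is ternary ({-1, 0, +1}) and keeps the perturbed spike input binary.

    import numpy as np

    def gradient_to_spike(spike_input, grad, rng=None):
        """Sketch of a gradient-to-spike converter (assumed mechanics).

        Ternarizes a continuous gradient map into {-1, 0, +1} so that
        adding it to a binary spike input keeps the input in {0, 1}.
        The sampling probability (|grad| normalized by its maximum)
        is an assumption, not necessarily the paper's exact rule.
        """
        rng = np.random.default_rng() if rng is None else rng
        mag = np.abs(grad)
        prob = mag / (mag.max() + 1e-12)        # assumed normalization
        mask = rng.random(grad.shape) < prob    # keep large-magnitude entries
        ternary = np.sign(grad) * mask          # values in {-1, 0, +1}
        # Clip directions that would push spikes outside {0, 1}:
        ternary = np.where(spike_input == 0, np.maximum(ternary, 0), ternary)
        ternary = np.where(spike_input == 1, np.minimum(ternary, 0), ternary)
        return ternary

    # Usage: perturb a binary spike map along the ternarized gradient.
    x = (np.random.default_rng(0).random((2, 4, 4)) > 0.5).astype(np.int8)
    g = np.random.default_rng(1).standard_normal(x.shape)
    x_adv = x + gradient_to_spike(x, g).astype(np.int8)
    assert set(np.unique(x_adv)) <= {0, 1}      # still a valid spike input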
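
The gradient trigger can be sketched in the same style. Here the all-zero check and the turnover_rate parameter are assumptions based on the abstract's wording ("randomly flip the spike inputs with a controllable turnover rate, when meeting all zero gradients"); the paper's actual trigger may differ in detail. In an iterative attack loop, one might pass the raw gradient map through this trigger first and then ternarize the result with the converter above.

    import numpy as np

    def gradient_trigger(spike_input, grad, turnover_rate=0.05, rng=None):
        """Sketch of a gradient trigger (assumed mechanics).

        When the incoming gradient map is all zero (gradient vanishing),
        construct a random ternary gradient that flips a fraction
        `turnover_rate` of the spike input; otherwise pass grad through.
        """
        rng = np.random.default_rng() if rng is None else rng
        if np.any(grad != 0):
            return grad                          # gradient available: no trigger
        flip = rng.random(spike_input.shape) < turnover_rate
        # Flipping a 0 requires +1; flipping a 1 requires -1:
        return np.where(flip, 1 - 2 * spike_input, 0)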