Paper Title
Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets
Paper Authors
Paper Abstract
Skip connections are an essential component of current state-of-the-art deep neural networks (DNNs) such as ResNet, WideResNet, DenseNet, and ResNeXt. Despite their huge success in enabling deeper and more powerful DNNs, we identify a surprising security weakness of skip connections in this paper: they allow easier generation of highly transferable adversarial examples. Specifically, in ResNet-like neural networks (those with skip connections), gradients can backpropagate through either the skip connections or the residual modules. We find that taking more gradient from the skip connections than from the residual modules, according to a decay factor, allows one to craft adversarial examples with high transferability. We term our method the Skip Gradient Method (SGM). We conduct comprehensive transfer attacks against state-of-the-art DNNs, including ResNets, DenseNets, Inceptions, Inception-ResNet, Squeeze-and-Excitation Networks (SENet), and robustly trained DNNs. We show that employing SGM on the gradient flow can greatly improve the transferability of crafted attacks in almost all cases. Furthermore, SGM can be easily combined with existing black-box attack techniques, obtaining significant improvements over state-of-the-art transferability methods. Our findings not only motivate new research into the architectural vulnerability of DNNs, but also open up further challenges for the design of secure DNN architectures.
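To make the abstract's gradient argument concrete, the following is a minimal scalar sketch (not the authors' code) of the decomposition SGM exploits. In a chain of residual blocks z ← z + f(z), each block's local derivative is 1 + f′(z): the skip connection contributes the constant 1, the residual branch contributes f′(z). SGM damps only the residual-branch term by a decay factor, here called `gamma`; the function and variable names below are illustrative assumptions.

```python
def sgm_gradient(x, blocks, gamma=0.5):
    """Gradient of a chain of scalar residual blocks z <- z + f(z),
    with each residual-branch derivative damped by gamma (the SGM idea).
    `blocks` is a list of (f, df) pairs: a residual function and its derivative.
    gamma = 1.0 recovers the exact gradient; gamma < 1.0 biases the
    backward pass toward the skip-connection paths."""
    z, grad = x, 1.0
    for f, df in blocks:
        # Local Jacobian of z + f(z): the skip path contributes 1,
        # the residual path contributes df(z), which SGM scales by gamma.
        grad *= 1.0 + gamma * df(z)
        z = z + f(z)  # forward pass is unchanged
    return grad

# Two toy residual blocks: f(z) = 0.5*z and f(z) = z**2.
blocks = [(lambda z: 0.5 * z, lambda z: 0.5),
          (lambda z: z * z,   lambda z: 2 * z)]
```

With `gamma=1.0` this reproduces the ordinary backpropagated gradient; with `gamma<1.0` the residual contributions shrink while the skip-connection contribution is preserved, which is the gradient the attack then uses to perturb the input. In a real network the same effect can be obtained by scaling the residual-branch gradient during backpropagation (e.g. via backward hooks in an autodiff framework).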