Paper Title

Transferable Adversarial Attack based on Integrated Gradients

Authors

Yi Huang, Adams Wai-Kin Kong

Abstract


The vulnerability of deep neural networks to adversarial examples has drawn tremendous attention from the community. Three approaches, optimizing standard objective functions, exploiting attention maps, and smoothing decision surfaces, are commonly used to craft adversarial examples. By tightly integrating the three approaches, we propose a new and simple algorithm named Transferable Attack based on Integrated Gradients (TAIG) in this paper, which can find highly transferable adversarial examples for black-box attacks. Unlike previous methods that use multiple computational terms or combine with other methods, TAIG integrates the three approaches into one single term. Two versions of TAIG, which compute their integrated gradients on a straight-line path and on a random piecewise linear path respectively, are studied. Both versions offer strong transferability and can seamlessly work together with previous methods. Experimental results demonstrate that TAIG outperforms the state-of-the-art methods. The code will be available at https://github.com/yihuang2016/TAIG
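To make the straight-line version concrete, the sketch below approximates integrated gradients with a Riemann sum along the path from a baseline to the input, then takes a signed step in that direction, in the FGSM style. This is a minimal illustration, not the authors' implementation: the toy model `f(x) = sum(x**2)` with analytic gradient `2x`, the midpoint discretization, the step size, and the variable names are all assumptions made for the example.

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=32):
    """Approximate integrated gradients of a scalar function along the
    straight-line path from `baseline` to `x` using a midpoint Riemann sum."""
    alphas = (np.arange(steps) + 0.5) / steps  # midpoints in (0, 1)
    total = np.zeros_like(x)
    for a in alphas:
        total += grad_fn(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

# Toy differentiable "model": f(x) = sum(x**2), whose gradient is 2x.
# (A hypothetical stand-in for a network's class-score gradient.)
grad_fn = lambda x: 2.0 * x

x = np.array([1.0, -2.0, 0.5])
baseline = np.zeros_like(x)
ig = integrated_gradients(grad_fn, x, baseline)

# For this f the exact integrated gradients are x**2, and by the
# completeness property they sum to f(x) - f(baseline).
print(ig)  # [1.   4.   0.25]

# One signed perturbation step guided by the integrated gradients;
# the sign/direction in a real attack depends on the objective.
eps = 0.1
x_adv = x + eps * np.sign(ig)
print(x_adv)
```

Because the toy gradient is linear in the path parameter, the midpoint rule here recovers the exact integral; with a real network, `steps` trades off fidelity of the approximation against the number of backward passes.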
