Paper Title

Projective Ranking-based GNN Evasion Attacks

Paper Authors

He Zhang, Xingliang Yuan, Chuan Zhou, Shirui Pan

Paper Abstract

Graph neural networks (GNNs) offer promising learning methods for graph-related tasks. However, GNNs are at risk of adversarial attacks. Two primary limitations of current evasion attack methods are highlighted: (1) The current GradArgmax ignores the "long-term" benefit of a perturbation, and it faces zero gradients and invalid benefit estimates in certain situations. (2) In reinforcement learning-based attack methods, the learned attack strategies might not be transferable when the attack budget changes. To this end, we first formulate the perturbation space and propose an evaluation framework and the projective ranking method. We aim to learn a powerful attack strategy, then adapt it as little as possible to generate adversarial samples under dynamic budget settings. In our method, based on mutual information, we rank and assess the attack benefit of each perturbation to form an effective attack strategy. By projecting the strategy, our method dramatically reduces the cost of learning a new attack strategy when the attack budget changes. In a comparative assessment against GradArgmax and RL-S2V, the results show that our method achieves high attack performance and effective transferability. Visualizations of our method also reveal various attack patterns in the generation of adversarial samples.
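The core transferability idea in the abstract (a fixed ranking of perturbations reused across budgets) can be illustrated with a toy sketch. This is not the authors' implementation: the benefit scores here are hypothetical placeholders, whereas the paper derives them from mutual information.

```python
# Toy sketch (assumed, not the paper's code): rank candidate edge
# perturbations once by an estimated attack benefit, then reuse the
# same ranking when the attack budget changes -- only the top-k
# cutoff moves, so no new strategy is learned per budget.

def rank_perturbations(benefits):
    """Sort candidate perturbations by descending estimated benefit.

    `benefits` maps a candidate edge flip (u, v) to a hypothetical
    benefit score (the paper estimates benefits via mutual information).
    """
    return sorted(benefits, key=benefits.get, reverse=True)

def select_under_budget(ranking, budget):
    """Take the top-`budget` perturbations from a fixed ranking."""
    return ranking[:budget]

# The same ranking serves budgets of 1, 2, or 3 edge flips.
scores = {(0, 3): 0.9, (1, 2): 0.4, (2, 4): 0.7, (0, 1): 0.1}
ranking = rank_perturbations(scores)
print(select_under_budget(ranking, 2))  # -> [(0, 3), (2, 4)]
```

The point of the sketch is the decoupling: scoring happens once, while budget-dependent selection is a cheap slice of the fixed ranking.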
