Paper Title

Sparse Vicious Attacks on Graph Neural Networks

Authors

Giovanni Trappolini, Valentino Maiorca, Silvio Severino, Emanuele Rodolà, Fabrizio Silvestri, Gabriele Tolomei

Abstract

Graph Neural Networks (GNNs) have proven successful in several predictive modeling tasks on graph-structured data. Among those tasks, link prediction is one of the fundamental problems for many real-world applications, such as recommender systems. However, GNNs are not immune to adversarial attacks, i.e., carefully crafted malicious examples designed to fool the predictive model. In this work, we focus on a specific white-box attack on GNN-based link prediction models, in which a malicious node aims to appear in the list of recommended nodes for a given target victim. To achieve this goal, the attacker node may also count on the cooperation of other existing peers that it directly controls, namely on the ability to inject a number of ``vicious'' nodes into the network. Specifically, all these malicious nodes can add new edges or remove existing ones, thereby perturbing the original graph. Thus, we propose SAVAGE, a novel framework and method for mounting this type of link prediction attack. SAVAGE formulates the adversary's goal as an optimization task, striking a balance between the effectiveness of the attack and the sparsity of the malicious resources required. Extensive experiments conducted on real-world and synthetic datasets demonstrate that adversarial attacks implemented through SAVAGE indeed achieve a high attack success rate while using only a small number of vicious nodes. Finally, although these attacks require full knowledge of the target model, we show that they transfer successfully to other black-box methods for link prediction.
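As a rough illustration only (this is our own notation, not the paper's exact formulation), the trade-off described in the abstract can be sketched as a single objective: let $f_\theta$ be the target link prediction model, $G$ the original graph, $\delta$ the perturbation contributed by the injected vicious nodes (new or removed edges), $a$ the attacker node, and $t$ the target victim. The attacker could then be thought of as solving

\min_{\delta} \; \mathcal{L}_{\mathrm{attack}}\big(f_\theta(G \oplus \delta);\, a, t\big) \;+\; \lambda\, \lVert \delta \rVert_0,

where $\mathcal{L}_{\mathrm{attack}}$ penalizes $a$ not being ranked among the recommended nodes for $t$, $G \oplus \delta$ is the perturbed graph, the $\ell_0$ term counts the vicious resources used, and $\lambda \ge 0$ weighs attack effectiveness against sparsity (in practice a differentiable relaxation of the $\ell_0$ term would presumably be optimized). The actual objective used by SAVAGE may differ; this sketch only conveys the effectiveness-versus-sparsity balance stated above.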
