Paper Title
Adversarial Attack on Community Detection by Hiding Individuals
Paper Authors
Paper Abstract
It has been demonstrated that adversarial graphs, i.e., graphs with imperceptible perturbations added, can cause deep graph models to fail on node/graph classification tasks. In this paper, we extend adversarial graphs to the problem of community detection, which is much more difficult. We focus on black-box attacks and aim to hide targeted individuals from detection by deep graph community detection models, which has many real-world applications, for example, protecting personal privacy in social networks and understanding camouflage patterns in transaction networks. We propose an iterative learning framework that alternately updates two modules: one working as the constrained graph generator and the other as the surrogate community detection model. We also find that the adversarial graphs generated by our method can be transferred to other learning-based community detection models.
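The abstract names only the two modules and the alternating update scheme; the paper's actual architectures, loss functions, and constraints are not given here. The following is a minimal PyTorch sketch of that alternation under stated assumptions: ConstrainedGraphGenerator, SurrogateDetector, soft_modularity, the confidence-based hiding loss, and the budget weight lam are all hypothetical illustrations, not the authors' method.

```python
import torch
import torch.nn as nn

class ConstrainedGraphGenerator(nn.Module):
    """Proposes soft edge flips on the input graph. The learnable `scores`
    give each node pair a flip probability; an L1-style penalty (see
    budget_penalty) keeps the perturbation small, standing in for the
    paper's imperceptibility constraint."""
    def __init__(self, num_nodes: int):
        super().__init__()
        # Initialized strongly negative so flip probabilities start near 0.
        self.scores = nn.Parameter(torch.full((num_nodes, num_nodes), -4.0))

    def forward(self, adj: torch.Tensor) -> torch.Tensor:
        p = torch.sigmoid(self.scores)
        p = (p + p.T) / 2.0              # keep the perturbation symmetric
        return adj + p - 2.0 * adj * p   # soft flip: 0 -> p, 1 -> 1 - p

    def budget_penalty(self) -> torch.Tensor:
        return torch.sigmoid(self.scores).sum()

class SurrogateDetector(nn.Module):
    """Toy stand-in for a learnable community detection model: one linear
    layer mapping adjacency rows to soft community assignments."""
    def __init__(self, num_nodes: int, num_communities: int):
        super().__init__()
        self.proj = nn.Linear(num_nodes, num_communities)

    def forward(self, adj: torch.Tensor) -> torch.Tensor:
        return torch.softmax(self.proj(adj), dim=-1)

def soft_modularity(assign: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
    # Relaxed modularity: trace(C^T B C) / 2m with B = A - k k^T / 2m.
    k = adj.sum(dim=1)
    two_m = adj.sum().clamp(min=1e-8)
    b = adj - torch.outer(k, k) / two_m
    return torch.trace(assign.T @ b @ assign) / two_m

def train_alternating(adj, targets, steps=200, lam=1e-3):
    n = adj.shape[0]
    gen = ConstrainedGraphGenerator(n)
    det = SurrogateDetector(n, num_communities=3)
    opt_gen = torch.optim.Adam(gen.parameters(), lr=0.05)
    opt_det = torch.optim.Adam(det.parameters(), lr=0.05)
    for _ in range(steps):
        # (1) Detector step: fit communities on the current attacked graph.
        attacked = gen(adj).detach()
        det_loss = -soft_modularity(det(attacked), attacked)
        opt_det.zero_grad(); det_loss.backward(); opt_det.step()
        # (2) Generator step: make the targets' community assignments
        # uncertain (low max probability) under an edge-budget penalty.
        assign = det(gen(adj))
        confidence = assign[targets].max(dim=-1).values.mean()
        gen_loss = confidence + lam * gen.budget_penalty()
        opt_gen.zero_grad(); gen_loss.backward(); opt_gen.step()
    return gen(adj).detach()

# Usage on a small random graph; nodes 0 and 1 are the individuals to hide.
adj = (torch.rand(20, 20) < 0.2).float()
adj = torch.triu(adj, diagonal=1)
adj = adj + adj.T
attacked_adj = train_alternating(adj, targets=[0, 1])
```

The soft edge flips are one possible relaxation: a hard top-k edge budget is non-differentiable, so this sketch trades it for sigmoid flip probabilities plus a penalty term, which lets both modules be trained by gradient descent in the alternating loop.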