Paper Title

Towards Transferable Adversarial Attack against Deep Face Recognition

Paper Authors

Yaoyao Zhong, Weihong Deng

Paper Abstract

Face recognition has achieved great success in the last five years due to the development of deep learning methods. However, deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples. In particular, the existence of transferable adversarial examples can severely hinder the robustness of DCNNs, since this type of attack can be applied in a fully black-box manner without queries on the target system. In this work, we first investigate the characteristics of transferable adversarial attacks in face recognition by showing the superiority of feature-level methods over label-level methods. Then, to further improve the transferability of feature-level adversarial examples, we propose DFANet, a dropout-based method applied in the convolutional layers, which can increase the diversity of surrogate models and obtain ensemble-like effects. Extensive experiments on state-of-the-art face models with various training databases, loss functions, and network architectures show that the proposed method can significantly enhance the transferability of existing attack methods. Finally, by applying DFANet to the LFW database, we generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries. This TALFW database is made available to facilitate research on the robustness and defense of deep face recognition.
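To make the core idea concrete, below is a minimal sketch of a feature-level, transfer-oriented attack in the spirit of DFANet: dropout is applied to intermediate convolutional feature maps at every iteration, so each gradient step effectively queries a different "sub-model" of the surrogate and yields an ensemble-like effect. This is not the authors' exact implementation; the model interface (`model.conv_blocks`, `model.head`, `model.embed`) and the hyperparameters (`eps`, `alpha`, `steps`, `p`) are hypothetical, illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def dfanet_style_attack(model, x_src, x_tgt, eps=8/255, alpha=1/255, steps=40, p=0.1):
    """Iteratively perturb x_src so that its embedding moves away from the
    embedding of x_tgt (a feature-level objective), while applying dropout
    to intermediate convolutional feature maps to emulate an ensemble of
    surrogate models (the DFANet idea, sketched under assumed interfaces)."""
    x_adv = x_src.clone().detach()
    with torch.no_grad():
        feat_tgt = model.embed(x_tgt)            # fixed reference embedding

    for _ in range(steps):
        x_adv.requires_grad_(True)
        h = x_adv
        for block in model.conv_blocks:          # hypothetical list of conv blocks
            h = block(h)
            # Dropout on feature maps: each iteration samples a different
            # "sub-model" of the surrogate, increasing surrogate diversity.
            h = F.dropout(h, p=p, training=True)
        feat_adv = model.head(h)                 # hypothetical embedding head
        # Feature-level loss: reduce similarity to the reference embedding.
        loss = F.cosine_similarity(feat_adv, feat_tgt).mean()
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()              # descend similarity
            x_adv = x_src + (x_adv - x_src).clamp(-eps, eps) # L_inf projection
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```

Because no gradients or scores from the target system are needed, the resulting `x_adv` can then be submitted to black-box face recognition APIs without any queries during crafting.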
