Paper Title


Robust Face Verification via Disentangled Representations

Authors

Marius Arvinte, Ahmed H. Tewfik, Sriram Vishwanath

Abstract


We introduce a robust algorithm for face verification, i.e., deciding whether two images are of the same person or not. Our approach is a novel take on the idea of using deep generative networks for adversarial robustness. We use the generative model during training as an online augmentation method instead of a test-time purifier that removes adversarial noise. Our architecture uses a contrastive loss term and a disentangled generative model to sample negative pairs. Instead of randomly pairing two real images, we pair an image with its class-modified counterpart while keeping its content (pose, head tilt, hair, etc.) intact. This enables us to efficiently sample hard negative pairs for the contrastive loss. We experimentally show that, when coupled with adversarial training, the proposed scheme converges with a weak inner solver and has a higher clean and robust accuracy than state-of-the-art methods when evaluated against white-box physical attacks.
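A minimal sketch of the contrastive loss term the abstract describes, where a negative pair is formed from an image embedding and the embedding of its class-modified counterpart. The embedding network and the disentangled generator are abstracted away; `modify_identity` is a hypothetical stand-in for the generator that changes identity while preserving content, and the margin value is illustrative, not taken from the paper.

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, is_same, margin=1.0):
    """Standard contrastive loss on a pair of embeddings.

    Positive pairs (same identity) are pulled together; negative
    pairs (different identity) are pushed apart up to the margin.
    """
    d = np.linalg.norm(np.asarray(emb_a) - np.asarray(emb_b))
    if is_same:
        return d ** 2                      # pull positives together
    return max(0.0, margin - d) ** 2       # push negatives past the margin

def modify_identity(emb):
    """Hypothetical placeholder for the disentangled generator:
    returns an embedding of the same content with a changed identity.
    In the paper this role is played by a generative model; here we
    just perturb the identity-carrying dimensions for illustration."""
    out = np.array(emb, dtype=float)
    out[0] += 0.5  # pretend dimension 0 carries identity
    return out

# Hard negative pair: an image paired with its class-modified counterpart.
anchor = np.array([0.0, 0.3, 0.7])
hard_negative = modify_identity(anchor)
loss = contrastive_loss(anchor, hard_negative, is_same=False, margin=1.0)
```

Because the generated counterpart differs only in identity, its embedding stays close to the anchor, so the hinge term `max(0, margin - d)` is large and the pair is an informative (hard) negative, unlike a randomly sampled real image.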
