Paper Title


Minority Reports Defense: Defending Against Adversarial Patches

Authors

Michael McCoyd, Won Park, Steven Chen, Neil Shah, Ryan Roggenkemper, Minjune Hwang, Jason Xinyu Liu, David Wagner

Abstract

Deep learning image classification is vulnerable to adversarial attack, even if the attacker changes just a small patch of the image. We propose a defense against patch attacks based on partially occluding the image around each candidate patch location, so that a few occlusions each completely hide the patch. We demonstrate on CIFAR-10, Fashion MNIST, and MNIST that our defense provides certified security against patch attacks of a certain size.
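The defense described above can be sketched as follows: slide an occlusion window over every candidate patch location, classify each occluded copy, and only certify a prediction when all occlusions agree, since at least one occlusion must fully hide any patch of bounded size. This is an illustrative sketch only; the function names, the zero-fill masking, and the unanimity rule are simplifications, and the paper's actual training and voting details differ.

```python
import numpy as np

def occluded_predictions(image, classify, occ_size, stride):
    """Classify copies of `image`, each with one square region occluded.

    `classify` maps an image array to a class label. The occlusion window
    should be larger than the attacker's patch so that some window fully
    covers every possible patch location. (Hypothetical helper, not the
    paper's implementation.)
    """
    h, w = image.shape[:2]
    preds = []
    for y in range(0, h - occ_size + 1, stride):
        for x in range(0, w - occ_size + 1, stride):
            occluded = image.copy()
            # Zero-fill masking as a stand-in for the paper's occlusion.
            occluded[y:y + occ_size, x:x + occ_size] = 0.0
            preds.append(classify(occluded))
    return preds

def certified_label(preds):
    """Return the unanimous label, or None when any occlusion disagrees
    (a 'minority report' that may indicate a patch attack)."""
    return preds[0] if len(set(preds)) == 1 else None

# Toy usage with a dummy brightness classifier on an 8x8 image.
img = np.ones((8, 8))
classify = lambda im: int(im.mean() > 0.5)
preds = occluded_predictions(img, classify, occ_size=4, stride=2)
label = certified_label(preds)  # unanimous, so a label is returned
```

Certification follows from a counting argument: if the window is large enough and the stride small enough, any patch is fully hidden by at least one occlusion, so a successful attack must change some occluded prediction and break unanimity.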
