Paper Title

Adversarial Machine Learning in Image Classification: A Survey Towards the Defender's Perspective

Authors

Machado, Gabriel Resende; Silva, Eugênio; Goldschmidt, Ronaldo Ribeiro

Abstract

Deep Learning algorithms have achieved state-of-the-art performance in Image Classification and are used even in security-critical applications, such as biometric recognition systems and self-driving cars. However, recent works have shown that these algorithms, which can even surpass human capabilities, are vulnerable to adversarial examples. In Computer Vision, adversarial examples are images containing subtle perturbations, generated by malicious optimization algorithms, that fool classifiers. In an attempt to mitigate these vulnerabilities, numerous countermeasures have been proposed in the literature. Nevertheless, devising an efficient defense mechanism has proven to be a difficult task, since many approaches have already been shown to be ineffective against adaptive attackers. Thus, this self-contained paper aims to provide all readers with a review of the latest research progress on Adversarial Machine Learning in Image Classification, but from the defender's perspective. It introduces novel taxonomies for categorizing adversarial attacks and defenses, and provides a discussion of the existence of adversarial examples. Further, in contrast to existing surveys, it gives relevant guidance that researchers should take into consideration when devising and evaluating defenses. Finally, based on the reviewed literature, some promising paths for future research are discussed.
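To make the notion of an adversarial example concrete, the sketch below illustrates the Fast Gradient Sign Method (FGSM), a classic gradient-based attack of the kind the abstract refers to. This is a minimal illustration, not the survey's own method; it assumes a differentiable PyTorch classifier `model`, an input batch `x` with pixel values in [0, 1], and true labels `y`.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft adversarial examples with FGSM (Goodfellow et al., 2015):
    x_adv = x + epsilon * sign(grad_x L(model(x), y)).

    Assumes `x` is a batch of images with pixel values in [0, 1].
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Move each pixel one epsilon-sized step in the direction
    # that increases the classification loss.
    x_adv = x + epsilon * x.grad.sign()
    # Keep the result a valid image (pixel values in [0, 1]).
    return x_adv.clamp(0.0, 1.0).detach()
```

A single epsilon-scaled sign step is the simplest instance of the optimization-based attacks described above; stronger attacks (e.g., PGD) iterate this step while projecting back onto the allowed perturbation set.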
