Paper Title

Quantum Adversarial Machine Learning

Authors

Sirui Lu, Lu-Ming Duan, Dong-Ling Deng

Abstract

Adversarial machine learning is an emerging field that focuses on studying vulnerabilities of machine learning approaches in adversarial settings and developing techniques accordingly to make learning robust to adversarial manipulations. It plays a vital role in various machine learning applications and has recently attracted tremendous attention across different communities. In this paper, we explore different adversarial scenarios in the context of quantum machine learning. We find that, similar to traditional classifiers based on classical neural networks, quantum learning systems are likewise vulnerable to crafted adversarial examples, independent of whether the input data is classical or quantum. In particular, we find that a quantum classifier that achieves nearly state-of-the-art accuracy can be conclusively deceived by adversarial examples obtained by adding imperceptible perturbations to the original legitimate samples. This is explicitly demonstrated with quantum adversarial learning in different scenarios, including classifying real-life images (e.g., handwritten digit images in the MNIST dataset), learning phases of matter (such as ferromagnetic/paramagnetic orders and symmetry-protected topological phases), and classifying quantum data. Furthermore, we show that, based on the information of the adversarial examples at hand, practical defense strategies can be designed to fight against a number of different attacks. Our results uncover the notable vulnerability of quantum machine learning systems to adversarial perturbations, which not only reveals a novel perspective in bridging machine learning and quantum physics in theory but also provides valuable guidance for practical applications of quantum classifiers based on both near-term and future quantum technologies.
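The attack the abstract describes, adding an imperceptible, gradient-derived perturbation to a legitimate input so that a trained classifier misclassifies it, can be sketched in a few lines. The snippet below is a minimal illustration in the spirit of the paper, not the authors' actual implementation: it mounts an FGSM-style (fast gradient sign method) attack on a toy variational quantum classifier built with PennyLane. The circuit layout, weight values, and perturbation budget epsilon are all illustrative assumptions.

```python
# Minimal FGSM-style attack on a toy variational quantum classifier.
# Illustrative sketch only; circuit, weights, and epsilon are assumptions.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def classifier(x, weights):
    # Angle-encode the classical input features into qubit rotations.
    for i in range(n_qubits):
        qml.RY(x[i], wires=i)
    # One variational layer: trainable rotations plus entangling CNOTs.
    for i in range(n_qubits):
        qml.RY(weights[i], wires=i)
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])
    # <Z> on the first qubit serves as the binary classification score.
    return qml.expval(qml.PauliZ(0))

def loss(x, weights, y):
    # Squared error between the classifier output and the label y in [-1, 1].
    return (classifier(x, weights) - y) ** 2

# Pretend these came from training; here they are fixed illustrative values.
weights = np.array([0.2, 0.4, 0.6, 0.8], requires_grad=False)
x = np.array([0.1, 0.5, -0.3, 0.8], requires_grad=True)  # legitimate sample
y = 1.0

# FGSM: step in the direction that increases the loss w.r.t. the *input*.
grad_x = qml.grad(loss, argnum=0)(x, weights, y)
epsilon = 0.1  # perturbation budget; small enough to be "imperceptible"
x_adv = x + epsilon * np.sign(grad_x)

print("clean score:      ", classifier(x, weights))
print("adversarial score:", classifier(x_adv, weights))
```

The same gradient information also points toward the defense strategy the abstract mentions: augmenting the training set with such adversarial examples (adversarial training) pushes the classifier's decision boundary away from legitimate samples and makes it more robust to this class of attack.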
