Paper Title

Adversarial Robustness is at Odds with Lazy Training

Paper Authors

Yunjuan Wang, Enayat Ullah, Poorya Mianjy, Raman Arora

Paper Abstract

Recent works show that adversarial examples exist for random neural networks [Daniely and Schacham, 2020] and that these examples can be found using a single step of gradient ascent [Bubeck et al., 2021]. In this work, we extend this line of work to "lazy training" of neural networks -- a dominant model in deep learning theory in which neural networks are provably efficiently learnable. We show that over-parametrized neural networks that are guaranteed to generalize well and enjoy strong computational guarantees remain vulnerable to attacks generated using a single step of gradient ascent.
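
To make the attack the abstract refers to concrete, here is a minimal PyTorch sketch of a single step of gradient ascent on the loss with respect to the input. The function name, the cross-entropy loss, the ℓ2-normalized step, and the step size are illustrative assumptions, not the paper's exact construction or guarantees.

```python
import torch
import torch.nn.functional as F

def single_step_gradient_ascent(model, x, y, step_size=0.1):
    """One gradient-ascent step on the loss w.r.t. the input x.

    Illustrative sketch of the attack family discussed in the abstract;
    the paper's exact perturbation model may differ.
    """
    # Clone the input and track gradients with respect to it.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Move along the input gradient to increase the loss
    # (here l2-normalized; an l-infinity sign step is another common choice).
    with torch.no_grad():
        delta = step_size * x.grad / (x.grad.norm() + 1e-12)
    return (x + delta).detach()
```

Usage is the standard pattern, e.g. `x_adv = single_step_gradient_ascent(net, x_batch, y_batch)`, after which `net(x_adv)` can be compared against `y_batch` to measure robust accuracy.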
