Paper Title
Defensive Patches for Robust Recognition in the Physical World
Paper Authors
Paper Abstract
To operate in real-world high-stakes environments, deep learning systems have to endure noises that continuously threaten their robustness. Data-end defense, which improves robustness by operating on input data rather than modifying models, has attracted intensive attention due to its practical feasibility. However, previous data-end defenses show poor generalization against diverse noises and weak transferability across multiple models. Motivated by the fact that robust recognition depends on both local and global features, we propose a defensive patch generation framework that addresses these problems by helping models better exploit such features. For generalization against diverse noises, we inject class-specific identifiable patterns into a confined local patch prior, so that defensive patches preserve more recognizable features of specific classes and guide models toward better recognition under noise. For transferability across multiple models, we guide the defensive patches to capture more global feature correlations within a class, so that they activate model-shared global perceptions and transfer better among models. Our defensive patches show great potential to improve application robustness in practice by simply sticking them around target objects. Extensive experiments show that our method outperforms others by large margins (improving accuracy by 20+% for both adversarial and corruption robustness, on average, in the digital and physical worlds). Our code is available at https://github.com/nlsde-safety-team/DefensivePatch
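As an illustration of the data-end idea described in the abstract, below is a minimal sketch (not the authors' exact training procedure) of optimizing a class-specific defensive patch in PyTorch: a small patch is pasted onto images of the target class and updated so that a frozen classifier keeps recognizing that class under random noise. The choice of ResNet-50, the 56x56 top-left patch placement, Gaussian noise as the perturbation model, the placeholder image batch, and all hyperparameters are assumptions made only for this sketch.

import torch
import torch.nn.functional as F
import torchvision

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen, pretrained recognition model (assumption: ImageNet ResNet-50).
model = torchvision.models.resnet50(weights="IMAGENET1K_V1").to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)

target_class = 0     # class the defensive patch should help recognize
patch_size = 56      # confined local patch (assumed size in pixels)
patch = torch.rand(1, 3, patch_size, patch_size, device=device, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.01)

def apply_patch(images, patch):
    # Paste the defensive patch into the top-left corner of each image.
    patched = images.clone()
    patched[:, :, :patch.shape[2], :patch.shape[3]] = patch.clamp(0, 1)
    return patched

# Placeholder batch of clean images labeled with the target class;
# in practice these come from real training images of that class.
images = torch.rand(8, 3, 224, 224, device=device)
labels = torch.full((8,), target_class, dtype=torch.long, device=device)

for step in range(200):
    # Simulate noise (here: additive Gaussian) so the optimized patch
    # preserves class-recognizable features under perturbation.
    noisy = (images + 0.1 * torch.randn_like(images)).clamp(0, 1)
    logits = model(apply_patch(noisy, patch))
    loss = F.cross_entropy(logits, labels)   # keep predicting the target class
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

At deployment time the optimized patch would simply be printed and stuck near the target object. The transferability objective described in the abstract (capturing global feature correlations shared across models) is omitted from this sketch.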