Paper Title


QUANOS- Adversarial Noise Sensitivity Driven Hybrid Quantization of Neural Networks

Authors

Panda, Priyadarshini

Abstract


Deep Neural Networks (DNNs) have been shown to be vulnerable to adversarial attacks, wherein a model is fooled by applying slight perturbations to the input. With the advent of the Internet-of-Things and the necessity to enable intelligence in embedded devices, low-power and secure hardware implementation of DNNs is vital. In this paper, we investigate the use of quantization to potentially resist adversarial attacks. Several recent studies have reported remarkable results in reducing the energy requirement of a DNN through quantization. However, no prior work has considered the relationship between the adversarial sensitivity of a DNN and its effect on quantization. We propose QUANOS, a framework that performs layer-specific hybrid quantization based on Adversarial Noise Sensitivity (ANS). We identify a novel noise stability metric (ANS) for DNNs, i.e., the sensitivity of each layer's computation to adversarial noise. ANS allows for a principled way of determining the optimal bit-width per layer that yields adversarial robustness as well as energy efficiency with minimal loss in accuracy. Essentially, QUANOS assigns layer significance based on each layer's contribution to adversarial perturbation and scales the precision of the layers accordingly. A key advantage of QUANOS is that it does not rely on a pre-trained model and can be applied in the initial stages of training. We evaluate the benefits of QUANOS on precision-scalable Multiply-and-Accumulate (MAC) hardware architectures with data gating and subword parallelism capabilities. Our experiments on the CIFAR10 and CIFAR100 datasets show that QUANOS outperforms a homogeneously quantized 8-bit precision baseline in terms of adversarial robustness (3%-4% higher) while yielding improved compression (>5x) and energy savings (>2x) at iso-accuracy.
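The idea of scoring each layer by its sensitivity to adversarial noise and then assigning a per-layer bit-width can be sketched as follows. This is a minimal illustration only: the abstract does not give the exact ANS formula or the bit-width mapping, so both `layer_ans` (a relative-L2-change proxy) and `assign_bitwidths` (a linear sensitivity-to-precision map) are assumptions, not the paper's actual method.

```python
import math

def layer_ans(clean_act, adv_act):
    # Illustrative proxy for Adversarial Noise Sensitivity: the relative L2
    # change in a layer's activations when the input is adversarially
    # perturbed. (Assumed form; the paper's exact definition is not in the
    # abstract.)
    diff = math.sqrt(sum((a - c) ** 2 for a, c in zip(adv_act, clean_act)))
    base = math.sqrt(sum(c ** 2 for c in clean_act))
    return diff / (base + 1e-12)

def assign_bitwidths(ans_scores, bit_choices=(2, 4, 6, 8)):
    # Hypothetical mapping: layers that amplify adversarial perturbations
    # the most keep the highest precision, while insensitive layers are
    # quantized aggressively for compression and energy savings.
    lo, hi = min(ans_scores), max(ans_scores)
    bits = []
    for s in ans_scores:
        frac = 0.0 if hi == lo else (s - lo) / (hi - lo)
        idx = min(int(frac * len(bit_choices)), len(bit_choices) - 1)
        bits.append(bit_choices[idx])
    return bits

# Example: three layers with increasing sensitivity get increasing precision.
scores = [0.1, 0.5, 0.9]
print(assign_bitwidths(scores))
```

Because the mapping is monotone in ANS, a hybrid-quantized network under this scheme spends its bit budget where adversarial noise propagates most strongly, which is the intuition the abstract describes.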
