Paper Title
HarDNN: Feature Map Vulnerability Evaluation in CNNs
Paper Authors
Abstract
As Convolutional Neural Networks (CNNs) are increasingly being employed in safety-critical applications, it is important that they behave reliably in the face of hardware errors. Transient hardware errors may percolate undesirable state during execution, resulting in software-manifested errors which can adversely affect high-level decision making. This paper presents HarDNN, a software-directed approach to identify vulnerable computations during a CNN inference and selectively protect them based on their propensity towards corrupting the inference output in the presence of a hardware error. We show that HarDNN can accurately estimate the relative vulnerability of a feature map (fmap) in CNNs using a statistical error injection campaign, and explore heuristics for fast vulnerability assessment. Based on these results, we analyze the tradeoff between error coverage and computational overhead that system designers can use when employing selective protection. Results show that the improvement in resilience for the added computation is superlinear with HarDNN. For example, HarDNN improves SqueezeNet's resilience by 10x with just 30% additional computation.
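The statistical error injection campaign described above can be illustrated with a minimal sketch: flip a random bit in a random neuron of a feature map, run the rest of the inference, and count how often the top-1 prediction changes. This is not the authors' implementation; the function names (`flip_bit`, `estimate_fmap_vulnerability`) and the use of NumPy with a caller-supplied `classify` function standing in for the downstream network are illustrative assumptions.

```python
import numpy as np

def flip_bit(value, bit):
    # Flip one bit in the IEEE-754 float32 representation of `value`,
    # modeling a transient single-bit hardware error.
    as_int = np.float32(value).view(np.uint32)
    return (as_int ^ np.uint32(1 << bit)).view(np.float32)

def estimate_fmap_vulnerability(fmap, classify, n_injections=1000, rng=None):
    # Monte-Carlo estimate of one fmap's relative vulnerability: the
    # fraction of random single-bit flips in its neurons that change
    # the downstream top-1 classification. `classify` is a hypothetical
    # stand-in for the remainder of the CNN inference.
    if rng is None:
        rng = np.random.default_rng(0)
    baseline = classify(fmap)
    mismatches = 0
    for _ in range(n_injections):
        idx = tuple(int(rng.integers(0, s)) for s in fmap.shape)
        bit = int(rng.integers(0, 32))
        corrupted = fmap.copy()
        corrupted[idx] = flip_bit(corrupted[idx], bit)
        if classify(corrupted) != baseline:
            mismatches += 1
    return mismatches / n_injections
```

Ranking all fmaps by this estimate would then let a designer protect only the most vulnerable ones, trading error coverage against computational overhead as the abstract describes.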