Paper Title

Neural Network Virtual Sensors for Fuel Injection Quantities with Provable Performance Specifications

Authors

Eric Wong, Tim Schneider, Joerg Schmitt, Frank R. Schmidt, J. Zico Kolter

Abstract

Recent work has shown that it is possible to learn neural networks with provable guarantees on the output of the model when subject to input perturbations; however, these works have focused primarily on defending against adversarial examples for image classifiers. In this paper, we study how these provable guarantees can be naturally applied to other real-world settings, namely obtaining performance specifications for robust virtual sensors measuring fuel injection quantities within an engine. We first demonstrate that, in this setting, even simple neural network models are highly susceptible to reasonable levels of adversarial sensor noise, which are capable of increasing the mean relative error of a standard neural network from 6.6% to 43.8%. We then leverage methods for learning provably robust networks and verifying robustness properties, resulting in a robust model which we can provably guarantee has at most 16.5% mean relative error under any sensor noise. Additionally, we show how specific intervals of fuel injection quantities can be targeted to maximize robustness for certain ranges, allowing us to train a virtual sensor for fuel injection which is provably guaranteed to have at most 10.69% relative error under noise while maintaining 3% relative error on non-adversarial data within normalized fuel injection ranges of 0.6 to 1.0.
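Certified guarantees of the kind described in the abstract are typically obtained by propagating the bounded set of sensor-noise perturbations through the network and bounding the resulting output range. Below is a minimal, hypothetical sketch of one such technique, interval bound propagation (IBP), for a small ReLU regressor under l_inf-bounded sensor noise; it is not the authors' implementation, and the layer sizes, noise radius eps, and placeholder data are assumptions chosen only for illustration.

# A minimal, hypothetical IBP sketch (not the paper's code): certify the output
# range of a small MLP regressor under l_inf sensor noise of radius eps.
import torch
import torch.nn as nn

class IntervalMLP(nn.Module):
    # Two-layer ReLU regressor; interval_forward propagates input bounds.
    def __init__(self, in_dim=20, hidden=64):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, 1)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

    def interval_forward(self, x, eps):
        lo, hi = x - eps, x + eps                        # l_inf ball around the sensor reading
        for layer, act in ((self.fc1, torch.relu), (self.fc2, None)):
            mid, rad = (lo + hi) / 2, (hi - lo) / 2
            mid = mid @ layer.weight.t() + layer.bias    # center passes through the affine map
            rad = rad @ layer.weight.abs().t()           # radius scales by |W|
            lo, hi = mid - rad, mid + rad
            if act is not None:                          # ReLU is monotone: apply to both bounds
                lo, hi = act(lo), act(hi)
        return lo, hi

# Hypothetical usage: certified worst-case relative error on placeholder data.
model = IntervalMLP()
x = torch.rand(8, 20)        # normalized sensor readings (placeholder)
y = torch.rand(8, 1) + 0.1   # true injection quantities (placeholder, kept away from 0)
lo, hi = model.interval_forward(x, eps=0.05)
worst_abs_err = torch.maximum((y - lo).abs(), (hi - y).abs())
print("certified worst-case relative error:", (worst_abs_err / y).max().item())

In this sketch the bounds are exact for the affine layers and valid (though possibly loose) after each monotone ReLU, so the printed quantity is a sound upper bound on the relative error for any sensor noise of magnitude at most eps.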
