Paper Title
SafeML: Safety Monitoring of Machine Learning Classifiers through Statistical Difference Measure
Paper Authors
Paper Abstract
Ensuring the safety and explainability of machine learning (ML) is a topic of increasing relevance as data-driven applications venture into safety-critical domains traditionally committed to high safety standards, standards that cannot be satisfied by exclusively testing otherwise inaccessible black-box systems. The interaction between safety and security is an especially central challenge, as security violations can compromise safety. The contribution of this paper to addressing both safety and security within a single protection concept, applicable during the operation of ML systems, is active monitoring of the behaviour and operational context of the data-driven system based on distance measures of the Empirical Cumulative Distribution Function (ECDF). We investigate abstract datasets (XOR, Spiral, Circle) and a current security-specific intrusion-detection dataset of simulated network traffic (CICIDS2017), using distributional-shift detection measures including the Kolmogorov-Smirnov, Kuiper, Anderson-Darling, Wasserstein, and mixed Wasserstein-Anderson-Darling measures. Our preliminary findings indicate that the approach can provide a basis for detecting whether the application context of an ML component is valid in the safety-security domain. Our preliminary code and results are available at https://github.com/ISorokos/SafeML.
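To illustrate the core idea of ECDF-based distance monitoring, the following minimal sketch compares a feature distribution observed during training against one observed in operation, using two of the measures named in the abstract (Kolmogorov-Smirnov and Wasserstein) as implemented in SciPy. The synthetic data and the 0.05 threshold are assumptions for illustration only; the paper's actual implementation is in the linked SafeML repository.

```python
import numpy as np
from scipy.stats import ks_2samp, wasserstein_distance

rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=1000)  # feature values seen during training
field = rng.normal(loc=0.5, scale=1.0, size=1000)  # shifted values observed in operation

# Kolmogorov-Smirnov: maximum vertical gap between the two ECDFs
ks_stat, p_value = ks_2samp(train, field)

# Wasserstein: area between the two ECDFs (earth mover's distance in 1-D)
wd = wasserstein_distance(train, field)

# A simple monitoring rule (illustrative threshold): flag the operational
# context as potentially invalid when the distributions differ significantly.
context_shift_detected = p_value < 0.05
print(ks_stat, wd, context_shift_detected)
```

In a monitor, such statistics would be computed per feature over a buffered window of operational inputs, with a detected shift signalling that the ML component may be operating outside its validated context.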