Paper Title
One Versus all for deep Neural Network Incertitude (OVNNI) quantification
Paper Authors
Paper Abstract
Deep neural networks (DNNs) are powerful learning models, yet their results are not always reliable. This is because modern DNNs are usually uncalibrated, and their epistemic uncertainty cannot be characterized. In this work, we propose a simple new technique to quantify the epistemic uncertainty of data. The method consists of mixing the predictions of an ensemble of DNNs, each trained to classify One class vs All the other classes (OVA), with the predictions of a standard DNN trained to perform All vs All (AVA) classification. On the one hand, the adjustment that the AVA DNN provides to the scores of the base classifiers allows for a more fine-grained inter-class separation. On the other hand, the two types of classifiers mutually reinforce their detection of out-of-distribution (OOD) samples, entirely circumventing the need for such samples during training. Our method achieves state-of-the-art performance in quantifying OOD data across multiple datasets and architectures while requiring little hyper-parameter tuning.
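Reading the abstract operationally, one plausible way to combine the two kinds of predictions is to multiply, for each class, the AVA softmax probability with the score of the matching OVA classifier, so that a sample only receives a confident score when both networks agree. The sketch below illustrates this assumed combination rule in NumPy; the function name `ovnni_scores` and the confidence-based uncertainty proxy are illustrative assumptions, not details taken from the paper text.

```python
import numpy as np

def ovnni_scores(ava_probs, ova_probs):
    """Combine AVA and OVA predictions as sketched in the abstract.

    ava_probs : (N, C) softmax outputs of the standard All-vs-All DNN.
    ova_probs : (N, C) per-class scores, column c coming from the
                One-vs-All classifier trained for class c.

    Returns (N, C) combined scores; a low maximum score can be read as
    high epistemic uncertainty (e.g. an out-of-distribution sample).
    """
    # Element-wise product: the AVA score refines the inter-class
    # separation while the OVA score gates out-of-distribution inputs.
    return ava_probs * ova_probs

# Toy usage: two samples, three classes.
ava = np.array([[0.7, 0.2, 0.1],
                [0.4, 0.3, 0.3]])          # AVA softmax outputs
ova = np.array([[0.9, 0.1, 0.1],
                [0.2, 0.1, 0.2]])          # per-class OVA confidences
scores = ovnni_scores(ava, ova)
uncertainty = 1.0 - scores.max(axis=1)     # simple confidence-based proxy
print(scores)
print(uncertainty)                          # the second sample looks more OOD-like
```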