Paper Title

A Unified Plug-and-Play Framework for Effective Data Denoising and Robust Abstention

Paper Authors

Krishanu Sarker, Xiulong Yang, Yang Li, Saeid Belkasim, Shihao Ji

Paper Abstract

The success of Deep Neural Networks (DNNs) highly depends on data quality. Moreover, predictive uncertainty makes high performing DNNs risky for real-world deployment. In this paper, we aim to address these two issues by proposing a unified filtering framework leveraging underlying data density, that can effectively denoise training data as well as avoid predicting uncertain test data points. Our proposed framework leverages underlying data distribution to differentiate between noise and clean data samples without requiring any modification to existing DNN architectures or loss functions. Extensive experiments on multiple image classification datasets and multiple CNN architectures demonstrate that our simple yet effective framework can outperform the state-of-the-art techniques in denoising training data and abstaining uncertain test data.
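
To give a concrete sense of what density-based filtering of this kind could look like, here is a minimal sketch under our own assumptions (not the authors' implementation): feature embeddings, e.g. from a CNN's penultimate layer, are assumed to be available as NumPy arrays; a k-NN distance in that space serves as a simple density proxy; low-density training samples are dropped as likely label noise; and test points whose density under their predicted class falls below a per-class threshold are abstained on. The function names, `keep_fraction`, and `quantile` parameters below are hypothetical illustrations.

```python
# Hypothetical sketch of density-based denoising and abstention in a learned
# feature space (not the paper's actual algorithm). Assumes features have
# already been extracted, e.g. from a CNN's penultimate layer.
import numpy as np
from sklearn.neighbors import NearestNeighbors


def knn_density(ref_features, query_features, k=10, drop_self=False):
    """Density proxy: inverse mean distance from each query to its k nearest
    reference points. Set drop_self=True when queries are the reference set."""
    nn = NearestNeighbors().fit(ref_features)
    n = k + 1 if drop_self else k
    dists, _ = nn.kneighbors(query_features, n_neighbors=n)
    if drop_self:
        dists = dists[:, 1:]  # discard the zero distance to the point itself
    return 1.0 / (dists.mean(axis=1) + 1e-12)


def denoise_training_set(features, labels, keep_fraction=0.9, k=10):
    """Keep the highest-density samples within each class; the rest are
    treated as likely label noise and filtered out before training."""
    keep = np.zeros(len(labels), dtype=bool)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        if len(idx) <= k + 1:       # class too small to estimate density
            keep[idx] = True
            continue
        density = knn_density(features[idx], features[idx], k=k, drop_self=True)
        n_keep = max(1, int(keep_fraction * len(idx)))
        keep[idx[np.argsort(density)[::-1][:n_keep]]] = True
    return keep


def abstention_mask(train_features, train_labels, test_features, test_preds,
                    k=10, quantile=0.05):
    """Abstain on test points whose density under their *predicted* class is
    below a low quantile of that class's own training densities."""
    abstain = np.zeros(len(test_preds), dtype=bool)
    for c in np.unique(train_labels):
        ref = train_features[train_labels == c]
        if len(ref) <= k + 1:
            continue
        train_density = knn_density(ref, ref, k=k, drop_self=True)
        threshold = np.quantile(train_density, quantile)
        sel = np.where(test_preds == c)[0]
        if len(sel) == 0:
            continue
        test_density = knn_density(ref, test_features[sel], k=k)
        abstain[sel[test_density < threshold]] = True
    return abstain
```

In this sketch, the DNN itself is untouched: one would retrain on the retained subset (`features[keep]`, `labels[keep]`) and, at deployment time, withhold predictions wherever `abstention_mask` is true, which mirrors the plug-and-play, architecture-agnostic spirit described in the abstract.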
