Paper Title


Reducing Overlearning through Disentangled Representations by Suppressing Unknown Tasks

Paper Authors

Naveen Panwar, Tarun Tater, Anush Sankaran, Senthil Mani

Paper Abstract


Existing deep learning approaches for learning visual features tend to overlearn and extract more information than what is required for the task at hand. From a privacy preservation perspective, the input visual information is not protected from the model, enabling the model to become more intelligent than it is trained to be. Current approaches for suppressing additional task learning assume the presence of ground truth labels for the tasks to be suppressed during training time. In this research, we propose a three-fold novel contribution: (i) a model-agnostic solution for reducing model overlearning by suppressing all the unknown tasks, (ii) a novel metric to measure the trust score of a trained deep learning model, and (iii) a simulated benchmark dataset, PreserveTask, having five different fundamental image classification tasks to study the generalization nature of models. In the first set of experiments, we learn disentangled representations and suppress overlearning of five popular deep learning models: VGG16, VGG19, Inception-v1, MobileNet, and DenseNet on the PreserveTask dataset. Additionally, we show results of our framework on the color-MNIST dataset and practical applications of face attribute preservation on the Diversity in Faces (DiF) and IMDB-Wiki datasets.
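The abstract mentions a metric for the "trust score" of a trained model but does not define it here. A minimal illustrative sketch of one plausible formulation — the function name `trust_score` and the leakage-penalty form are assumptions for illustration, not the paper's actual definition — could reward target-task accuracy while penalizing above-chance probe accuracy on tasks that should have been suppressed:

```python
def trust_score(target_acc, suppressed_accs, chance_accs):
    """Hypothetical trust score for a trained model.

    target_acc      -- accuracy on the intended task, in [0, 1]
    suppressed_accs -- probe accuracies on tasks the model should NOT learn
    chance_accs     -- chance-level accuracy for each suppressed task
    """
    if not suppressed_accs:
        return target_acc
    # Normalized leakage per suppressed task:
    # 0 when the probe is at chance, 1 when it reaches perfect accuracy.
    leakages = [
        max(0.0, acc - chance) / (1.0 - chance)
        for acc, chance in zip(suppressed_accs, chance_accs)
    ]
    avg_leakage = sum(leakages) / len(leakages)
    # High trust = accurate on the target task, near chance on suppressed tasks.
    return target_acc * (1.0 - avg_leakage)
```

Under this sketch, a model whose suppressed-task probes sit at chance keeps its full target accuracy as its score (`trust_score(0.9, [0.5], [0.5])` → 0.9), while one from which a suppressed attribute is fully recoverable scores 0 (`trust_score(0.9, [1.0], [0.5])` → 0.0).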
