Paper Title
Curriculum Temperature for Knowledge Distillation
Paper Authors
Paper Abstract
Most existing distillation methods ignore the flexible role of the temperature in the loss function and fix it as a hyper-parameter that can be decided by an inefficient grid search. In general, the temperature controls the discrepancy between two distributions and can faithfully determine the difficulty level of the distillation task. Keeping a constant temperature, i.e., a fixed level of task difficulty, is usually sub-optimal for a growing student during its progressive learning stages. In this paper, we propose a simple curriculum-based technique, termed Curriculum Temperature for Knowledge Distillation (CTKD), which controls the task difficulty level during the student's learning career through a dynamic and learnable temperature. Specifically, following an easy-to-hard curriculum, we gradually increase the distillation loss w.r.t. the temperature, leading to increased distillation difficulty in an adversarial manner. As an easy-to-use plug-in technique, CTKD can be seamlessly integrated into existing knowledge distillation frameworks and brings general improvements at a negligible additional computation cost. Extensive experiments on CIFAR-100, ImageNet-2012, and MS-COCO demonstrate the effectiveness of our method. Our code is available at https://github.com/zhengli97/CTKD.
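To make the adversarial-temperature idea concrete, below is a minimal PyTorch sketch, not the authors' implementation (see the linked repository for the actual CTKD code). It assumes a single global learnable temperature trained through a gradient reversal layer, so that one backward pass minimizes the distillation loss w.r.t. the student while maximizing it w.r.t. the temperature; the names (_GradReverse, CurriculumTemperature, kd_loss), the linear warm-up schedule for lambd, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class _GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda in
    the backward pass, so the parameter behind it is updated to *maximize*
    the loss that everything else minimizes."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class CurriculumTemperature(nn.Module):
    """A single global learnable temperature, trained adversarially."""

    def __init__(self, init_temp=1.0):
        super().__init__()
        self.temp = nn.Parameter(torch.tensor(init_temp))

    def forward(self, lambd):
        # lambd follows an easy-to-hard curriculum: near 0 early on (the
        # temperature barely moves, an easy task) and growing over training
        # (the temperature adversarially raises the distillation difficulty).
        return _GradReverse.apply(self.temp, lambd)


def kd_loss(student_logits, teacher_logits, temp):
    """Standard KL-divergence distillation loss, scaled by T^2."""
    log_p_s = F.log_softmax(student_logits / temp, dim=1)
    p_t = F.softmax(teacher_logits / temp, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * temp * temp


# Hypothetical training loop: one optimizer updates both the student and the
# temperature; gradient reversal makes their objectives adversarial.
# student, teacher, loader, and num_epochs are assumed to exist elsewhere.
temp_module = CurriculumTemperature()
optimizer = torch.optim.SGD(
    list(student.parameters()) + list(temp_module.parameters()), lr=0.05
)
for epoch in range(num_epochs):
    lambd = min(epoch / 10.0, 1.0)  # illustrative linear warm-up schedule
    for images, _ in loader:
        temp = temp_module(lambd).clamp(min=1e-2)  # keep T positive
        with torch.no_grad():
            t_logits = teacher(images)
        loss = kd_loss(student(images), t_logits, temp)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

The gradient reversal trick is what makes the scheme "plug-in": no separate inner maximization loop is needed, since flipping the temperature's gradient inside a single backward pass already implements the min-max objective, and the curriculum factor lambd simply controls how strongly the adversary is allowed to push.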