Paper Title

PODNet: Pooled Outputs Distillation for Small-Tasks Incremental Learning

Paper Authors

Arthur Douillard, Matthieu Cord, Charles Ollion, Thomas Robert, Eduardo Valle

Paper Abstract

Lifelong learning has attracted much attention, but existing works still struggle to fight catastrophic forgetting and accumulate knowledge over long stretches of incremental learning. In this work, we propose PODNet, a model inspired by representation learning. By carefully balancing the compromise between remembering the old classes and learning new ones, PODNet fights catastrophic forgetting, even over very long runs of small incremental tasks, a setting so far unexplored by current works. PODNet innovates on existing art with an efficient spatial-based distillation loss applied throughout the model and a representation comprising multiple proxy vectors for each class. We validate those innovations thoroughly, comparing PODNet with three state-of-the-art models on three datasets: CIFAR100, ImageNet100, and ImageNet1000. Our results showcase a significant advantage of PODNet over existing art, with accuracy gains of 12.10, 6.51, and 2.85 percentage points, respectively. Code is available at https://github.com/arthurdouillard/incremental_learning.pytorch
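To make the abstract's central idea more concrete, below is a minimal, hypothetical PyTorch sketch of a pooled-outputs spatial distillation term. It is not the authors' implementation (see the linked repository for that); the function name `pod_spatial_loss` and the assumption that `old_feature_maps` and `new_feature_maps` are lists of same-shaped convolutional activations from the frozen previous model and the current model are illustrative choices.

```python
import torch
import torch.nn.functional as F


def pod_spatial_loss(old_feature_maps, new_feature_maps):
    """Sketch of a pooled-outputs (spatial) distillation loss.

    Each list element is a feature map of shape (batch, channels, height, width)
    taken at the same depth from the old (frozen) model and the new model.
    """
    loss = 0.0
    for old, new in zip(old_feature_maps, new_feature_maps):
        # Pool along width and along height separately, then concatenate:
        # the statistics keep coarse spatial structure without forcing an
        # exact pixel-wise match between the two models.
        old_pooled = torch.cat([old.sum(dim=3), old.sum(dim=2)], dim=-1)
        new_pooled = torch.cat([new.sum(dim=3), new.sum(dim=2)], dim=-1)

        # Flatten per sample and L2-normalize before comparing.
        old_pooled = F.normalize(old_pooled.flatten(start_dim=1), dim=-1)
        new_pooled = F.normalize(new_pooled.flatten(start_dim=1), dim=-1)

        # Euclidean distance between the pooled statistics, averaged over the batch.
        loss = loss + (old_pooled - new_pooled).norm(dim=-1).mean()

    # Average over the compared layers.
    return loss / len(old_feature_maps)
```

In a training step, this term would typically be added (with a weighting factor) to the classification loss, with the old model's feature maps computed under `torch.no_grad()`.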
