Paper Title
vCLIMB: A Novel Video Class Incremental Learning Benchmark
Paper Authors
Paper Abstract
Continual learning (CL) is under-explored in the video domain. The few existing works contain splits with imbalanced class distributions over the tasks, or study the problem in unsuitable datasets. We introduce vCLIMB, a novel video continual learning benchmark. vCLIMB is a standardized test-bed to analyze catastrophic forgetting of deep models in video continual learning. In contrast to previous work, we focus on class incremental continual learning with models trained on a sequence of disjoint tasks, and distribute the number of classes uniformly across the tasks. We perform in-depth evaluations of existing CL methods in vCLIMB, and observe two unique challenges in video data. First, the selection of instances to store in episodic memory must be performed at the frame level. Second, untrimmed training data influences the effectiveness of frame sampling strategies. We address these two challenges by proposing a temporal consistency regularization that can be applied on top of memory-based continual learning methods. Our approach significantly improves over the baselines, by up to 24% on the untrimmed continual learning task.
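The abstract does not give the exact form of the temporal consistency regularizer. A minimal sketch of one plausible instantiation, assuming the regularizer penalizes disagreement between features extracted from two different frame samplings of the same stored clip (the function names, the MSE form, and the weight `lam` are illustrative assumptions, not the paper's definition):

```python
import numpy as np

def temporal_consistency_loss(feats_a, feats_b):
    """Hypothetical consistency term: mean squared difference between
    feature vectors produced by two frame samplings of the same
    memory video. Shapes assumed (batch, dim)."""
    return float(np.mean((feats_a - feats_b) ** 2))

def total_loss(ce_loss, feats_a, feats_b, lam=0.5):
    """Combine the usual classification loss of a memory-based CL
    method with the consistency term; lam is an assumed weight."""
    return ce_loss + lam * temporal_consistency_loss(feats_a, feats_b)
```

Under this reading, identical samplings incur zero extra loss, so the regularizer only penalizes representations that drift across temporal samplings of a replayed video.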