Paper Title

Sample Condensation in Online Continual Learning

Authors

Mattia Sangermano, Antonio Carta, Andrea Cossu, Davide Bacciu

Abstract

Online continual learning is a challenging learning scenario in which the model must learn from a non-stationary stream of data where each sample is seen only once. The main challenge is to learn incrementally while avoiding catastrophic forgetting, namely the problem of forgetting previously acquired knowledge while learning from new data. A popular solution in this scenario is to use a small memory to retain old data and rehearse it over time. Unfortunately, due to the limited memory size, the quality of the memory deteriorates over time. In this paper we propose OLCGM, a novel replay-based continual learning strategy that uses knowledge condensation techniques to continuously compress the memory and make better use of its limited size. The sample condensation step compresses old samples instead of removing them, as other replay strategies do. As a result, the experiments show that, whenever the memory budget is limited compared to the complexity of the data, OLCGM improves the final accuracy compared to state-of-the-art replay strategies.
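To make the replay-with-condensation idea concrete, below is a minimal sketch of a rehearsal buffer that, when full, merges incoming samples with stored ones instead of evicting them. This is only an illustration of the general idea described in the abstract: the CondensingReplayBuffer class, the per-class weighted-average "condensation", and all parameter names are assumptions for the sketch, not the OLCGM algorithm itself (which relies on knowledge/dataset condensation techniques).

```python
# Hedged sketch (assumption): a replay buffer that condenses old samples instead of
# dropping them when the memory is full. The weighted average below is a stand-in
# for a real condensation step, used purely to illustrate the mechanism.
import random

import torch


class CondensingReplayBuffer:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.samples = []  # list of (x, y) pairs

    def add(self, x: torch.Tensor, y: int) -> None:
        """Store a new stream sample; condense rather than evict when full."""
        if len(self.samples) < self.capacity:
            self.samples.append((x.clone(), y))
        else:
            self._condense(x, y)

    def _condense(self, x_new: torch.Tensor, y_new: int) -> None:
        """Merge the new sample into a stored sample of the same class.

        Old information is compressed instead of removed; here compression is a
        simple 50/50 average, chosen only for illustration.
        """
        same_class = [i for i, (_, y) in enumerate(self.samples) if y == y_new]
        if not same_class:
            # No stored sample of this class: fall back to replacing a random slot.
            idx = random.randrange(len(self.samples))
            self.samples[idx] = (x_new.clone(), y_new)
            return
        idx = random.choice(same_class)
        x_old, _ = self.samples[idx]
        self.samples[idx] = (0.5 * x_old + 0.5 * x_new, y_new)

    def sample_batch(self, batch_size: int):
        """Draw a rehearsal mini-batch to interleave with the online stream."""
        batch = random.sample(self.samples, min(batch_size, len(self.samples)))
        xs = torch.stack([x for x, _ in batch])
        ys = torch.tensor([y for _, y in batch])
        return xs, ys


# Usage sketch: rehearse from the buffer while learning from the online stream.
if __name__ == "__main__":
    buffer = CondensingReplayBuffer(capacity=100)
    for step in range(500):
        x, y = torch.randn(3, 32, 32), random.randrange(10)  # stand-in stream sample
        buffer.add(x, y)
        if len(buffer.samples) >= 10:
            xs, ys = buffer.sample_batch(10)
            # ...compute the loss on (x, y) together with (xs, ys) and update the model...
```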
