Paper Title
Continual Learning with Distributed Optimization: Does CoCoA Forget?
Paper Authors
Paper Abstract
We focus on the continual learning problem, where tasks arrive sequentially and the aim is to perform well on the newly arrived task without performance degradation on the previously seen tasks. In contrast to the continual learning literature, which focuses on the centralized setting, we investigate the distributed estimation framework. We consider the well-established distributed learning algorithm COCOA. We derive closed-form expressions for the iterations in the overparametrized case. We illustrate the convergence and the error performance of the algorithm based on the over/under-parameterization of the problem. Our results show that, depending on the problem dimensions and data generation assumptions, COCOA can perform continual learning over a sequence of tasks, i.e., it can learn a new task without forgetting previously learned tasks, with access to only one task at a time.
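To make the setting concrete, below is a minimal, illustrative sketch (not the authors' implementation) of the kind of experiment the abstract describes: a CoCoA-style distributed least-squares solver, with the model coefficients partitioned across nodes, applied to tasks that arrive one at a time and warm-started from the previous task's solution. The number of nodes, the 1/K averaging of block updates, the exact local solves, and the toy problem dimensions and data generation are all assumptions made for illustration only.

```python
# Illustrative sketch of sequential (continual) learning with a CoCoA-style
# distributed least-squares solver.  All specifics here (4 nodes, 1/K averaged
# aggregation, exact local solves, toy dimensions) are assumptions, not the
# paper's exact algorithm or experiments.
import numpy as np

rng = np.random.default_rng(0)


def cocoa_least_squares(A, y, w0, num_nodes=4, num_iters=300):
    """Simplified CoCoA-style iteration on a single task y ~ A w.

    The p coefficients are split evenly across num_nodes nodes; each node
    exactly minimizes the shared residual over its own coefficient block,
    and the block updates are combined with averaging weight 1/num_nodes.
    """
    n, p = A.shape
    w = w0.copy()
    blocks = np.array_split(np.arange(p), num_nodes)
    for _ in range(num_iters):
        r = y - A @ w                       # residual shared among nodes
        for idx in blocks:                  # local subproblem at node k
            delta = np.linalg.pinv(A[:, idx]) @ r
            w[idx] += delta / num_nodes     # damped (averaged) aggregation
    return w


# Two toy tasks: overparametrized (p > n) regression problems generated from
# a common coefficient vector, so retaining the first task is possible.
p = 60
w_true = rng.standard_normal(p)
tasks = []
for _ in range(2):
    A = rng.standard_normal((20, p))
    tasks.append((A, A @ w_true))

# Tasks are visited sequentially; only the current task's data is used, and
# each task is warm-started from the previous task's solution.
w = np.zeros(p)
for t, (A, y) in enumerate(tasks, start=1):
    w = cocoa_least_squares(A, y, w)
    for s, (A_s, y_s) in enumerate(tasks[:t], start=1):
        err = np.linalg.norm(y_s - A_s @ w) / np.linalg.norm(y_s)
        print(f"after task {t}: relative error on task {s} = {err:.3e}")
```

The printed errors on previously seen tasks indicate whether the earlier tasks are retained after training on the new one; as the abstract notes, whether forgetting occurs depends on the problem dimensions and the data generation assumptions.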