Paper Title

Maximum Roaming Multi-Task Learning

Authors

Lucas Pascal, Pietro Michiardi, Xavier Bost, Benoit Huet, Maria A. Zuluaga

Abstract


Multi-task learning has gained popularity due to the advantages it provides with respect to resource usage and performance. Nonetheless, the joint optimization of parameters with respect to multiple tasks remains an active research topic. Sub-partitioning the parameters between different tasks has proven to be an efficient way to relax the optimization constraints over the shared weights, may the partitions be disjoint or overlapping. However, one drawback of this approach is that it can weaken the inductive bias generally set up by the joint task optimization. In this work, we present a novel way to partition the parameter space without weakening the inductive bias. Specifically, we propose Maximum Roaming, a method inspired by dropout that randomly varies the parameter partitioning, while forcing them to visit as many tasks as possible at a regulated frequency, so that the network fully adapts to each update. We study the properties of our method through experiments on a variety of visual multi-task data sets. Experimental results suggest that the regularization brought by roaming has more impact on performance than usual partitioning optimization strategies. The overall method is flexible, easily applicable, provides superior regularization and consistently achieves improved performances compared to recent multi-task learning formulations.
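The abstract's central mechanism can be illustrated with a small sketch: each task holds a binary partition mask over a set of shared parameters, and the partitions are perturbed over time so that every parameter eventually "roams" across (visits) every task. The sketch below is an illustrative NumPy toy under assumed names (`init_partition`, `roaming_step`); it is not the paper's exact update rule, which in particular regulates the roaming frequency so the network can adapt between updates.

```python
import numpy as np

def init_partition(num_tasks, num_units, p, rng):
    """Random binary task-to-unit assignment: each shared unit is
    active for a given task with probability p (partitions may overlap)."""
    return rng.random((num_tasks, num_units)) < p

def roaming_step(mask, visited, rng):
    """One illustrative roaming update: for each task, deactivate one
    currently active unit and activate one unit the task has not
    visited yet, so every unit eventually serves every task while the
    partition size per task stays constant."""
    num_tasks, num_units = mask.shape
    for t in range(num_tasks):
        unvisited = np.flatnonzero(~visited[t])  # units task t never used
        active = np.flatnonzero(mask[t])
        if unvisited.size == 0 or active.size == 0:
            continue  # task t has already visited every unit
        new = rng.choice(unvisited)
        old = rng.choice(active)
        mask[t, old] = False
        mask[t, new] = True
        visited[t, new] = True
    return mask, visited

rng = np.random.default_rng(0)
mask = init_partition(num_tasks=3, num_units=10, p=0.5, rng=rng)
visited = mask.copy()          # a task has "visited" its initial units
for _ in range(50):
    mask, visited = roaming_step(mask, visited, rng)
```

After enough steps, `visited` is all-True: each task has used every shared unit at least once, which is the "maximum" coverage the abstract refers to, while at any instant each task still trains on a fixed-size sub-partition.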
