Paper Title
A Survey on Self-supervised Pre-training for Sequential Transfer Learning in Neural Networks
Paper Authors
Paper Abstract
Deep neural networks are typically trained under a supervised learning framework in which a model learns a single task from labeled data. Rather than relying solely on labeled data, practitioners can harness unlabeled or related data, which is often more accessible and abundant, to improve model performance. Self-supervised pre-training for transfer learning is becoming an increasingly popular technique for improving state-of-the-art results using unlabeled data. It involves first pre-training a model on a large amount of unlabeled data and then adapting the model to the target tasks of interest. In this review, we survey self-supervised learning methods and their applications within the sequential transfer learning framework. We provide an overview of the taxonomies of self-supervised learning and transfer learning, and highlight prominent methods for designing pre-training tasks across different domains. Finally, we discuss recent trends and suggest areas for future investigation.
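To make the two-stage workflow described in the abstract concrete, the following is a minimal sketch, not taken from the paper, of self-supervised pre-training followed by sequential transfer to a labeled target task. The pretext task (masked-feature reconstruction), the architecture, dimensions, and all variable names are illustrative assumptions.

# Minimal sketch of "pre-train on unlabeled data, then adapt to a target task".
# Pretext task, model sizes, and data are placeholder assumptions.
import torch
import torch.nn as nn

# Shared encoder reused across both stages.
encoder = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 64))

# Stage 1: self-supervised pre-training (no labels).
# Pretext task: reconstruct randomly masked input features.
decoder = nn.Linear(64, 128)
pretrain_opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
unlabeled = torch.randn(1024, 128)  # stand-in for a large unlabeled corpus
for _ in range(5):
    mask = (torch.rand_like(unlabeled) > 0.15).float()  # drop ~15% of features
    recon = decoder(encoder(unlabeled * mask))
    loss = nn.functional.mse_loss(recon, unlabeled)
    pretrain_opt.zero_grad(); loss.backward(); pretrain_opt.step()

# Stage 2: adapt (fine-tune) the pre-trained encoder on the labeled target task.
head = nn.Linear(64, 10)  # task-specific classification head
finetune_opt = torch.optim.Adam(
    list(encoder.parameters()) + list(head.parameters()), lr=1e-4)
inputs = torch.randn(256, 128)               # small labeled set
labels = torch.randint(0, 10, (256,))
for _ in range(5):
    logits = head(encoder(inputs))
    loss = nn.functional.cross_entropy(logits, labels)
    finetune_opt.zero_grad(); loss.backward(); finetune_opt.step()

In this sketch the encoder's weights learned from the pretext task are carried over unchanged into the fine-tuning stage, which is the sequential transfer setting the survey focuses on; alternatives such as freezing the encoder or using other pretext tasks follow the same structure.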