Paper Title

A Survey on Contrastive Self-supervised Learning

Authors

Ashish Jaiswal, Ashwin Ramesh Babu, Mohammad Zaki Zadeh, Debapriya Banerjee, Fillia Makedon

Abstract

Self-supervised learning has gained popularity because of its ability to avoid the cost of annotating large-scale datasets. It is capable of adopting self-defined pseudo labels as supervision and using the learned representations for several downstream tasks. Specifically, contrastive learning has recently become a dominant component in self-supervised learning methods for computer vision, natural language processing (NLP), and other domains. It aims at embedding augmented versions of the same sample close to each other while trying to push away embeddings from different samples. This paper provides an extensive review of self-supervised methods that follow the contrastive approach. The work explains commonly used pretext tasks in a contrastive learning setup, followed by the different architectures that have been proposed so far. Next, we present a performance comparison of different methods on multiple downstream tasks such as image classification, object detection, and action recognition. Finally, we conclude with the limitations of the current methods and the need for further techniques and future directions to make substantial progress.
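The core idea described in the abstract, pulling embeddings of augmented views of the same sample together while pushing embeddings of different samples apart, is commonly implemented with an NT-Xent-style contrastive loss. Below is a minimal NumPy sketch of such a loss; the function name `nt_xent_loss` and the `temperature` default are illustrative choices, not anything specified by this survey.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Sketch of an NT-Xent-style contrastive loss.

    z1, z2: (N, d) arrays holding embeddings of two augmented views
    of the same N samples; row i of z1 and row i of z2 form a
    positive pair, and every other row acts as a negative.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)              # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize rows
    sim = z @ z.T / temperature                       # cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    # Row i's positive is the other augmented view of the same sample.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_softmax = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_softmax[np.arange(2 * n), pos].mean()
```

Minimizing this loss maximizes the similarity of each positive pair relative to all in-batch negatives, which is exactly the "pull together / push apart" objective the abstract describes.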
