Paper Title
Self-Attention Message Passing for Contrastive Few-Shot Learning
Paper Authors
Paper Abstract
Humans have a unique ability to learn new representations from just a handful of examples with little to no supervision. Deep learning models, however, require an abundance of data and supervision to perform at a satisfactory level. Unsupervised few-shot learning (U-FSL) is the pursuit of bridging this gap between machines and humans. Inspired by the capacity of graph neural networks (GNNs) in discovering complex inter-sample relationships, we propose a novel self-attention based message passing contrastive learning approach (coined as SAMP-CLR) for U-FSL pre-training. We also propose an optimal transport (OT) based fine-tuning strategy (we call OpT-Tune) to efficiently induce task awareness into our novel end-to-end unsupervised few-shot classification framework (SAMPTransfer). Our extensive experimental results corroborate the efficacy of SAMPTransfer in a variety of downstream few-shot classification scenarios, setting a new state-of-the-art for U-FSL on both miniImagenet and tieredImagenet benchmarks, offering up to 7%+ and 5%+ improvements, respectively. Our further investigations also confirm that SAMPTransfer remains on-par with some supervised baselines on miniImagenet and outperforms all existing U-FSL baselines in a challenging cross-domain scenario. Our code can be found in our GitHub repository at https://github.com/ojss/SAMPTransfer/.
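To make the two components named in the abstract more concrete, the following is a minimal sketch, not the authors' implementation: (1) a self-attention message-passing layer that lets each sample embedding aggregate information from all other samples in a batch, in the spirit of SAMP-CLR, and (2) an entropy-regularized (Sinkhorn) optimal-transport step of the kind OpT-Tune could use to align query features with class prototypes. All class names, dimensions, and hyperparameters below are illustrative assumptions, not values taken from the paper or its repository.

```python
# Illustrative sketch only; names, shapes, and hyperparameters are assumptions,
# not the SAMPTransfer reference implementation.
import torch
import torch.nn as nn


class SelfAttentionMessagePassing(nn.Module):
    """One round of message passing where self-attention weights act as a soft graph
    over the samples in a batch."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_samples, dim). Each sample attends to every other sample,
        # so messages are aggregated according to learned pairwise affinities.
        messages, _ = self.attn(x, x, x)
        return self.norm(x + messages)


def sinkhorn_transport(cost: torch.Tensor, eps: float = 0.05, iters: int = 50) -> torch.Tensor:
    """Entropy-regularized optimal transport plan between uniform marginals,
    computed with Sinkhorn iterations on a (queries x prototypes) cost matrix."""
    n, m = cost.shape
    K = torch.exp(-cost / eps)          # Gibbs kernel
    a = torch.full((n,), 1.0 / n)       # uniform marginal over queries
    b = torch.full((m,), 1.0 / m)       # uniform marginal over prototypes
    v = torch.ones(m)
    for _ in range(iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]  # transport plan, rows ~ soft assignments


if __name__ == "__main__":
    # Toy usage: refine 80 sample embeddings of dimension 128, then transport
    # 75 hypothetical query embeddings onto 5 hypothetical class prototypes.
    feats = torch.randn(1, 80, 128)
    refined = SelfAttentionMessagePassing(128)(feats)
    cost = torch.cdist(torch.randn(75, 128), torch.randn(5, 128))
    plan = sinkhorn_transport(cost)
    print(refined.shape, plan.shape)
```

The residual connection plus layer norm keeps the message-passing refinement close to the original embeddings, and the Sinkhorn plan can be read row-wise as a task-aware soft assignment of queries to prototypes; how the paper combines these pieces end to end is specified in the full text and the linked repository, not here.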