Paper Title

Contrastive Knowledge-Augmented Meta-Learning for Few-Shot Classification

Authors

Rakshith Subramanyam, Mark Heimann, Jayram Thathachar, Rushil Anirudh, Jayaraman J. Thiagarajan

Abstract

Model-agnostic meta-learning algorithms aim to infer priors from several observed tasks that can then be used to adapt to a new task with few examples. Given the inherent diversity of tasks arising in existing benchmarks, recent methods use separate learnable structures, such as hierarchies or graphs, to enable task-specific adaptation of the prior. While these approaches have produced significantly better meta-learners, our goal is to improve their performance when the heterogeneous task distribution contains challenging distribution shifts and semantic disparities. To this end, we introduce CAML (Contrastive Knowledge-Augmented Meta-Learning), a novel approach for knowledge-enhanced few-shot learning that evolves a knowledge graph to effectively encode historical experience, and employs a contrastive distillation strategy to leverage the encoded knowledge for task-aware modulation of the base learner. Using standard benchmarks, we evaluate the performance of CAML in different few-shot learning scenarios. In addition to standard few-shot task adaptation, we also consider the more challenging multi-domain task adaptation and few-shot dataset generalization settings in our empirical studies. Our results show that CAML consistently outperforms the best known approaches and achieves improved generalization.
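To make the "infer a prior, then adapt with few examples" idea concrete, here is a minimal sketch of first-order model-agnostic meta-learning on toy linear-regression tasks. This is an illustration of the generic MAML setup the abstract builds on, not the CAML method itself; the toy task distribution, learning rates, and helper names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def mse(w, X, y):
    # Mean-squared error of a linear model y ≈ X @ w.
    return np.mean((X @ w - y) ** 2)

def grad(w, X, y):
    # Analytic gradient of the MSE above.
    return 2 * X.T @ (X @ w - y) / len(y)

def sample_task(n=10):
    # Hypothetical task distribution: each task is a random linear map,
    # split into a few-shot support set and a query set.
    w_true = rng.normal(size=2)
    X = rng.normal(size=(2 * n, 2))
    y = X @ w_true
    return (X[:n], y[:n]), (X[n:], y[n:])

w_meta = np.zeros(2)            # shared prior (meta-parameters)
inner_lr, outer_lr = 0.1, 0.02

for _ in range(500):
    (Xs, ys), (Xq, yq) = sample_task()
    # Inner loop: one gradient step on the support set adapts the prior.
    w_task = w_meta - inner_lr * grad(w_meta, Xs, ys)
    # Outer loop: first-order update of the prior from the query-set gradient.
    w_meta -= outer_lr * grad(w_task, Xq, yq)

# Evaluation: adapting the prior with one step on a new task's support set
# should beat using the prior unadapted, averaged over held-out tasks.
baseline, adapted = 0.0, 0.0
for _ in range(50):
    (Xs, ys), (Xq, yq) = sample_task()
    w_task = w_meta - inner_lr * grad(w_meta, Xs, ys)
    baseline += mse(w_meta, Xq, yq) / 50
    adapted += mse(w_task, Xq, yq) / 50
print(adapted < baseline)
```

CAML goes further than this plain setup: rather than a single fixed prior, it modulates the base learner per task using knowledge distilled from an evolving knowledge graph of past experience.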
