Title
Two-Level Adversarial Visual-Semantic Coupling for Generalized Zero-shot Learning
Authors
Abstract
The performance of generative zero-shot methods depends mainly on the quality of the generated features and on how well the model facilitates knowledge transfer between the visual and semantic domains. The quality of the generated features is a direct consequence of the model's ability to capture the several modes of the underlying data distribution. To address these issues, we propose a new two-level joint maximization idea that augments the generative network with an inference network during training, which helps our model capture the multiple modes of the data and generate features that better represent the underlying data distribution. This provides strong cross-modal interaction for effective knowledge transfer between the visual and semantic domains. Furthermore, existing methods train the zero-shot classifier either on generated synthetic image features or on latent embeddings produced via representation learning. In this work, we unify these paradigms into a single model that, in addition to synthesizing image features, also exploits the representation-learning capabilities of the inference network to provide discriminative features for the final zero-shot recognition task. We evaluate our approach on four benchmark datasets, i.e., CUB, FLO, AWA1, and AWA2, against several state-of-the-art methods and demonstrate its performance. We also perform ablation studies to analyze and understand our method more closely on the Generalized Zero-shot Learning task.
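The data flow described in the abstract (synthesize visual features for unseen classes from their attribute vectors, embed them with an inference network, and train the final classifier on the combined features) can be sketched schematically. This is a minimal illustration only: the dimensions, the fixed random linear "networks", and the nearest-centroid classifier below are all hypothetical stand-ins, not the paper's adversarially trained architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: visual feature, class attribute, latent code.
D_VIS, D_ATTR, D_LAT = 16, 8, 4

# Stand-in "generator": maps a class-attribute vector plus noise to a synthetic
# visual feature. In the paper this is a trained generative network; here it is
# a fixed random linear map, purely to show the data flow.
W_gen = rng.normal(size=(D_ATTR + D_LAT, D_VIS))

def synthesize_features(attr, n_samples):
    """Generate n_samples synthetic visual features for one attribute vector."""
    z = rng.normal(size=(n_samples, D_LAT))               # noise codes
    inp = np.hstack([np.tile(attr, (n_samples, 1)), z])
    return inp @ W_gen

# Stand-in "inference network": projects visual features to a latent embedding.
W_inf = rng.normal(size=(D_VIS, D_LAT))

def embed(x):
    return np.tanh(x @ W_inf)

# Two hypothetical unseen classes: synthesize features for each, then fit a
# nearest-centroid classifier on the unified [visual ; latent] representation,
# mirroring the idea of training the final classifier on both feature types.
attrs = rng.normal(size=(2, D_ATTR))
centroids = []
for a in attrs:
    feats = synthesize_features(a, 64)
    joint = np.hstack([feats, embed(feats)])
    centroids.append(joint.mean(axis=0))
centroids = np.vstack(centroids)

def classify(x):
    """Assign a visual feature to the nearest class centroid in joint space."""
    joint = np.concatenate([x, embed(x)])
    dists = np.linalg.norm(centroids - joint, axis=1)
    return int(np.argmin(dists))
```

A real implementation would replace the random maps with trained networks and the centroid rule with a softmax classifier, but the three-stage structure (generate, embed, classify on the unified features) is the same.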