Title
Simple and effective localized attribute representations for zero-shot learning
Authors
Abstract
Zero-shot learning (ZSL) aims to discriminate images from unseen classes by exploiting relations to seen classes via their semantic descriptions. Some recent papers have shown the importance of localized features, together with fine-tuning the feature extractor, for obtaining discriminative and transferable features. However, these methods require complex attention or part-detection modules to perform explicit localization in the visual space. In contrast, in this paper we propose localizing representations in the semantic/attribute space, with a simple but effective pipeline where localization is implicit. Focusing on attribute representations, we show that our method obtains state-of-the-art performance on the CUB and SUN datasets, and also achieves competitive results on the AWA2 dataset, generally outperforming more complex methods with explicit localization in the visual space. Our method is easy to implement and can serve as a new baseline for zero-shot learning. In addition, our localized representations are highly interpretable as attribute-specific heatmaps.
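The pipeline described above can be sketched as follows: project each spatial location of a CNN feature map into attribute space (yielding one heatmap per attribute, so localization is implicit), pool into an image-level attribute vector, and score classes by compatibility with their semantic descriptions. This is a minimal illustrative sketch, not the paper's exact architecture; all dimensions, the random weights, and the average-pooling choice are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a CNN feature map with D channels over an
# H x W spatial grid, A attributes, C candidate (unseen) classes.
D, H, W, A, C = 512, 7, 7, 312, 50

features = rng.standard_normal((D, H, W))      # local visual features
W_attr = rng.standard_normal((A, D)) * 0.01    # learned visual-to-attribute projection
class_attrs = rng.standard_normal((C, A))      # per-class semantic descriptions

# Project every spatial location into attribute space: one H x W
# heatmap per attribute, interpretable as attribute-specific localization.
attr_maps = np.einsum('ad,dhw->ahw', W_attr, features)   # shape (A, H, W)

# Global average pooling yields a single attribute vector for the image.
attr_vec = attr_maps.mean(axis=(1, 2))                   # shape (A,)

# Class scores: compatibility with each class's attribute signature;
# the prediction is the highest-scoring class.
scores = class_attrs @ attr_vec                          # shape (C,)
pred = int(np.argmax(scores))
```

In a trained model, `W_attr` would be learned end-to-end while fine-tuning the feature extractor; here random weights only demonstrate the data flow.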