Paper Title
Extracting associations and meanings of objects depicted in artworks through bi-modal deep networks
Paper Authors
Paper Abstract
We present a novel bi-modal system based on deep networks to address the problem of learning associations and simple meanings of objects depicted in "authored" images, such as fine art paintings and drawings. Our overall system processes both the images and associated texts in order to learn associations between images of individual objects, their identities, and the abstract meanings they signify. Unlike past deep nets that describe depicted objects and infer predicates, our system identifies meaning-bearing objects ("signifiers") and their associations ("signifieds"), as well as basic overall meanings, for target artworks. Our system achieved a precision of 48% and a recall of 78%, for an F1 score of 0.6, on a curated set of Dutch vanitas paintings, a genre celebrated for its concentration on conveying meanings of great import at the time of the paintings' execution. We developed and tested our system on fine art paintings, but our general methods can be applied to other authored images.
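As a quick sanity check on the reported numbers, the short Python sketch below computes the standard F1 score (the harmonic mean of precision and recall) from the 48% precision and 78% recall quoted in the abstract; the helper function is ours for illustration and is not part of the paper's system.

```python
# Sketch: verifying the reported F1 score from the stated precision and recall.
# The 0.48 precision and 0.78 recall come from the abstract; the harmonic-mean
# formula below is the standard F1 definition, not a detail taken from the paper.

def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

if __name__ == "__main__":
    p, r = 0.48, 0.78
    print(f"F1 = {f1_score(p, r):.2f}")  # ~0.59, consistent with the reported 0.6
```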