Paper Title
ObjCAViT: Improving Monocular Depth Estimation Using Natural Language Models And Image-Object Cross-Attention
Paper Authors
Paper Abstract
While monocular depth estimation (MDE) is an important problem in computer vision, it is difficult due to the ambiguity that results from the compression of a 3D scene into only 2 dimensions. It is common practice in the field to treat it as simple image-to-image translation, without consideration for the semantics of the scene and the objects within it. In contrast, humans and animals have been shown to use higher-level information to solve MDE: prior knowledge of the nature of the objects in the scene, their positions and likely configurations relative to one another, and their apparent sizes have all been shown to help resolve this ambiguity. In this paper, we present a novel method to enhance MDE performance by encouraging use of known-useful information about the semantics of objects and inter-object relationships within a scene. Our novel ObjCAViT module sources world-knowledge from language models and learns inter-object relationships in the context of the MDE problem using transformer attention, incorporating apparent size information. Our method produces highly accurate depth maps, and we obtain competitive results on the NYUv2 and KITTI datasets. Our ablation experiments show that the use of language and cross-attention within the ObjCAViT module increases performance. Code is released at https://github.com/DylanAuty/ObjCAViT.
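The core mechanism the abstract describes — object/language tokens attending over image features via transformer attention — can be illustrated with a minimal scaled dot-product cross-attention sketch. This is not the authors' implementation (the ObjCAViT module's actual architecture is detailed in the paper and released code); the token counts, embedding width, and variable names below are purely illustrative assumptions.

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Single-head scaled dot-product cross-attention.

    queries: (n_q, d) — e.g. object tokens derived from a language model
    keys/values: (n_kv, d) — e.g. image feature tokens
    Returns: (n_q, d) — object tokens contextualised by the image.
    """
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)           # (n_q, n_kv) similarity
    scores -= scores.max(axis=-1, keepdims=True)     # softmax, numerically stable
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ values                          # attention-weighted mix of values

# Hypothetical sizes: 5 object tokens attend over 10 image patch tokens of width 32.
rng = np.random.default_rng(0)
obj_tokens = rng.standard_normal((5, 32))    # stand-in for language-model object embeddings
img_tokens = rng.standard_normal((10, 32))   # stand-in for image patch features
out = cross_attention(obj_tokens, img_tokens, img_tokens)
print(out.shape)  # (5, 32): one image-informed vector per object token
```

Each output row is a convex combination of image-token values, so the object representations are grounded in the image content — the same high-level idea as letting semantic object knowledge inform the depth features.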