Paper Title

CLIP-Nav: Using CLIP for Zero-Shot Vision-and-Language Navigation

Paper Authors

Vishnu Sashank Dorbala, Gunnar Sigurdsson, Robinson Piramuthu, Jesse Thomason, Gaurav S. Sukhatme

Paper Abstract

Household environments are visually diverse. Embodied agents performing Vision-and-Language Navigation (VLN) in the wild must be able to handle this diversity, while also following arbitrary language instructions. Recently, Vision-Language models like CLIP have shown great performance on the task of zero-shot object recognition. In this work, we ask if these models are also capable of zero-shot language grounding. In particular, we utilize CLIP to tackle the novel problem of zero-shot VLN using natural language referring expressions that describe target objects, in contrast to past work that used simple language templates describing object classes. We examine CLIP's capability in making sequential navigational decisions without any dataset-specific finetuning, and study how it influences the path that an agent takes. Our results on the coarse-grained instruction following task of REVERIE demonstrate the navigational capability of CLIP, surpassing the supervised baseline in terms of both success rate (SR) and success weighted by path length (SPL). More importantly, we quantitatively show that our CLIP-based zero-shot approach generalizes better to show consistent performance across environments when compared to SOTA, fully supervised learning approaches when evaluated via Relative Change in Success (RCS).
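The abstract's core idea of using CLIP for sequential navigational decisions can be illustrated with a minimal sketch: at each step, the agent embeds the referring expression and the candidate views, then moves toward the view with the highest cosine similarity. The embeddings below are hypothetical NumPy stand-ins for real CLIP text and image features; the function name `pick_direction` is an assumption for illustration, not the paper's API.

```python
import numpy as np

def pick_direction(text_emb, view_embs):
    """Return the index of the candidate view whose embedding has the
    highest cosine similarity with the instruction embedding."""
    t = text_emb / np.linalg.norm(text_emb)
    v = view_embs / np.linalg.norm(view_embs, axis=1, keepdims=True)
    scores = v @ t  # cosine similarity per candidate view
    return int(np.argmax(scores)), scores

# Toy embeddings standing in for CLIP outputs (hypothetical values).
text = np.array([1.0, 0.0, 0.0])
views = np.array([
    [0.2, 0.9, 0.1],   # view 0: poor match
    [0.9, 0.1, 0.1],   # view 1: best match
    [0.5, 0.5, 0.5],   # view 2: partial match
])
best, scores = pick_direction(text, views)
print(best)  # → 1
```

Repeating this greedy choice at each panoramic viewpoint yields the zero-shot navigation behavior the abstract describes, with no dataset-specific finetuning of the vision-language model.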
