Paper Title

Automated acquisition of structured, semantic models of manipulation activities from human VR demonstration

Authors

Andrei Haidu, Michael Beetz

Abstract

In this paper we present a system capable of collecting and annotating human-performed, robot-understandable everyday activities from virtual environments. The human movements are mapped into the simulated world using off-the-shelf virtual reality devices with full-body and eye-tracking capabilities. All interactions in the virtual world are physically simulated, so movements and their effects relate closely to the real world. During activity execution, a subsymbolic data logger records the environment and the human gaze on a per-frame basis, enabling offline scene reproduction and replay. Coupled with the physics engine, online monitors (symbolic data loggers) parse (using various grammars) and record events, actions, and their effects in the simulated world.
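
The abstract's split between a per-frame subsymbolic logger and symbolic event monitors can be made concrete with a short sketch. The following is a minimal, hypothetical Python illustration, not the authors' implementation: all names (Frame, SubsymbolicLogger, ContactMonitor) and the contact-change event rule are assumptions standing in for the paper's grammar-based parsers.

```python
# Hypothetical sketch of the two-layer logging idea described in the abstract:
# a subsymbolic logger stores raw per-frame state for offline replay, while a
# symbolic monitor derives discrete events from changes between frames.

from dataclasses import dataclass

@dataclass
class Frame:
    """Raw per-frame state as a subsymbolic logger might capture it."""
    time: float
    poses: dict        # object name -> (x, y, z) position
    gaze_target: str   # object the user is currently looking at
    contacts: set      # pairs of objects currently in contact

class SubsymbolicLogger:
    """Records every frame verbatim, enabling scene reproduction/replay."""
    def __init__(self):
        self.frames = []

    def record(self, frame: Frame):
        self.frames.append(frame)

class ContactMonitor:
    """Symbolic logger: emits ContactStarted/ContactEnded events by
    comparing consecutive frames (a stand-in for grammar-based parsing)."""
    def __init__(self):
        self.prev_contacts = set()
        self.events = []

    def on_frame(self, frame: Frame):
        for pair in frame.contacts - self.prev_contacts:
            self.events.append((frame.time, "ContactStarted", pair))
        for pair in self.prev_contacts - frame.contacts:
            self.events.append((frame.time, "ContactEnded", pair))
        self.prev_contacts = set(frame.contacts)

# Usage: feed simulated frames to both loggers.
sub, mon = SubsymbolicLogger(), ContactMonitor()
for f in [
    Frame(0.0, {"hand": (0, 0, 0)}, "cup", set()),
    Frame(0.1, {"hand": (0, 0, 1)}, "cup", {("hand", "cup")}),
    Frame(0.2, {"hand": (0, 0, 2)}, "cup", set()),
]:
    sub.record(f)
    mon.on_frame(f)

print(mon.events)
# [(0.1, 'ContactStarted', ('hand', 'cup')),
#  (0.2, 'ContactEnded', ('hand', 'cup'))]
```

The design point this sketch mirrors is that the two loggers consume the same frame stream: the subsymbolic layer keeps everything for replay, while the symbolic layer keeps only the event-level interpretation.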
