Paper Title
Sim2Real Grasp Pose Estimation for Adaptive Robotic Applications
Authors
Abstract
Adaptive robotics plays an essential role in achieving truly co-creative cyber-physical systems. In robotic manipulation tasks, one of the biggest challenges is to estimate the pose of a given workpiece. Even though recent deep-learning-based models show promising results, they require an immense dataset for training. In this paper, two vision-based, multi-object grasp pose estimation models (MOGPE), MOGPE Real-Time and MOGPE High-Precision, are proposed. Furthermore, a sim2real method based on domain randomization is presented to diminish the reality gap and overcome the data shortage. Our methods yielded an 80% and a 96.67% success rate in a real-world robotic pick-and-place experiment with the MOGPE Real-Time and the MOGPE High-Precision models, respectively. Our framework provides an industrial tool for fast data generation and model training while requiring minimal domain-specific data.
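To illustrate the domain-randomization idea behind the sim2real approach, the sketch below samples randomized scene parameters (lighting, camera jitter, distractor textures, workpiece pose) for synthetic image generation. The parameter names and ranges are illustrative assumptions, not the paper's actual settings or renderer.

```python
import random
from dataclasses import dataclass

@dataclass
class SceneParams:
    """Randomized parameters for one synthetic training image (illustrative)."""
    light_intensity: float    # relative brightness of the scene light
    camera_jitter_deg: float  # rotation noise around the nominal viewpoint
    texture_id: int           # index into a pool of background/distractor textures
    object_yaw_deg: float     # in-plane rotation of the workpiece

def sample_scene(rng: random.Random, n_textures: int = 50) -> SceneParams:
    """Draw one randomized scene configuration.

    Randomizing these nuisance factors forces a model trained on the
    rendered images to rely on object geometry rather than appearance,
    which is the core mechanism for closing the reality gap.
    Ranges here are assumptions for the sketch.
    """
    return SceneParams(
        light_intensity=rng.uniform(0.5, 1.5),
        camera_jitter_deg=rng.uniform(-5.0, 5.0),
        texture_id=rng.randrange(n_textures),
        object_yaw_deg=rng.uniform(0.0, 360.0),
    )

# Specify a small randomized dataset; each entry would drive one render.
rng = random.Random(0)
dataset = [sample_scene(rng) for _ in range(1000)]
```

Each `SceneParams` instance would be handed to a renderer to produce one labeled training image, so the dataset size is limited only by rendering time rather than by manual annotation.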