Paper Title
Multi-modal Transfer Learning for Grasping Transparent and Specular Objects
Paper Authors
Paper Abstract
State-of-the-art object grasping methods rely on depth sensing to plan robust grasps, but commercially available depth sensors fail to detect transparent and specular objects. To improve grasping performance on such objects, we introduce a method for learning a multi-modal perception model by bootstrapping from an existing uni-modal model. This transfer learning approach requires only a pre-existing uni-modal grasping model and paired multi-modal image data for training, foregoing the need for ground-truth grasp success labels or real grasp attempts. Our experiments demonstrate that our approach is able to reliably grasp transparent and reflective objects. Video and supplementary material are available at https://sites.google.com/view/transparent-specular-grasping.
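The abstract describes the approach only at a high level. Below is a minimal sketch of the bootstrapping idea, assuming a distillation-style setup in PyTorch: a frozen depth-only "teacher" model scores paired images, and a multi-modal RGB-D "student" is trained to reproduce those scores, so no ground-truth grasp success labels or real grasp attempts are required. The network architectures, data shapes, and loss below are illustrative placeholders, not the paper's actual models.

```python
# Sketch only: bootstrap a multi-modal grasp model from a frozen uni-modal one.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset


def small_cnn(in_channels: int) -> nn.Sequential:
    """Tiny stand-in backbone mapping an image to a grasp-quality score in [0, 1]."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 1), nn.Sigmoid(),
    )


# Frozen uni-modal teacher: placeholder for the pre-existing depth-only grasp model.
teacher = small_cnn(in_channels=1)
teacher.eval()

# Multi-modal student: consumes concatenated RGB + depth (4 channels).
student = small_cnn(in_channels=4)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
criterion = nn.MSELoss()

# Paired multi-modal data: synthetic stand-in for aligned RGB and depth images.
rgb = torch.rand(256, 3, 64, 64)
depth = torch.rand(256, 1, 64, 64)
loader = DataLoader(TensorDataset(rgb, depth), batch_size=32, shuffle=True)

for rgb_batch, depth_batch in loader:
    with torch.no_grad():
        # The teacher's grasp-quality prediction on depth alone is the training target.
        target = teacher(depth_batch)
    # The student predicts the same quantity from the full RGB-D input.
    pred = student(torch.cat([rgb_batch, depth_batch], dim=1))
    loss = criterion(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In this reading, the supervision signal comes entirely from the existing uni-modal model's outputs on paired images, which is what lets the multi-modal model be trained without labeled or real-robot grasp data; at test time the student can rely on RGB cues where depth is missing, as on transparent or specular surfaces.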