Paper Title
Online Refinement of a Scene Recognition Model for Mobile Robots by Observing Human's Interaction with Environments
Paper Authors
Paper Abstract
This paper describes a method for online refinement of a scene recognition model for robot navigation that considers traversable plants, i.e., flexible plant parts that a robot can push aside while moving. In scene recognition systems that account for traversable plants growing out onto paths, misclassification may lead to the robot getting stuck when traversable plants are recognized as obstacles. Yet misclassification is inevitable in any estimation method. In this work, we propose a framework for refining a semantic segmentation model on the fly during the robot's operation. We introduce few-shot segmentation based on weight imprinting for online model refinement without fine-tuning. Training data are collected by observing a human's interaction with the plant parts. We propose a novel robust weight imprinting scheme to mitigate the effect of noise in the masks generated from the interaction. The proposed method was evaluated in experiments on real-world data and shown to outperform ordinary weight imprinting and to provide results competitive with fine-tuning with model distillation, while requiring less computational cost.
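The weight-imprinting idea the abstract relies on can be sketched in a few lines: a new class's classifier weight is set to the normalized mean of normalized pixel embeddings inside the class mask, and pixels are then labeled by cosine similarity against the imprinted weights. This is a minimal illustrative sketch of generic weight imprinting, not the paper's actual implementation; the function names, the toy feature dimensions, and the plain cosine-similarity head are assumptions.

```python
import numpy as np

def imprint_weight(features, mask):
    """Imprint a classifier weight for one class.

    features: (H, W, D) pixel embeddings from a segmentation backbone.
    mask:     (H, W) boolean mask of pixels labeled as the class
              (in the paper's setting, derived from observed interaction).
    Returns a unit-norm (D,) weight vector usable as a classifier row.
    """
    feats = features[mask]                                        # (N, D)
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)  # per-pixel L2 norm
    w = feats.mean(axis=0)                                        # average embedding
    return w / np.linalg.norm(w)                                  # renormalize

def classify(features, weights):
    """Label each pixel by cosine similarity to the imprinted class weights."""
    feats = features / np.linalg.norm(features, axis=-1, keepdims=True)
    scores = feats @ weights.T                                    # (H, W, C)
    return scores.argmax(axis=-1)

# Toy usage: two classes whose embeddings cluster along different axes.
feats = np.zeros((2, 2, 3))
feats[0, 0] = [1.0, 0.0, 0.0]
feats[0, 1] = [0.9, 0.1, 0.0]
feats[1, 0] = [0.0, 1.0, 0.0]
feats[1, 1] = [0.0, 0.9, 0.2]
mask0 = np.array([[True, True], [False, False]])
W = np.stack([imprint_weight(feats, mask0), imprint_weight(feats, ~mask0)])
labels = classify(feats, W)   # recovers the two masked regions
```

Because imprinting only writes new rows into the classifier, no gradient steps are needed, which is why it suits on-the-fly refinement; the paper's "robust" variant additionally down-weights noisy mask pixels when averaging.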