Paper Title
Synthetic and Real Inputs for Tool Segmentation in Robotic Surgery
Paper Authors
Paper Abstract
Semantic tool segmentation in surgical videos is important for surgical scene understanding and computer-assisted interventions, as well as for the development of robotic automation. The problem is challenging because varying illumination conditions, bleeding, smoke and occlusions can reduce algorithm robustness. At present, labelled data for training deep learning models is still lacking for semantic surgical instrument segmentation, and in this paper we show that it may be possible to use robot kinematic data coupled with laparoscopic images to alleviate the labelling problem. We propose a new deep-learning-based model for parallel processing of both laparoscopic and simulation images for robust segmentation of surgical tools. Due to the lack of laparoscopic frames annotated with both segmentation ground truth and kinematic information, a new custom dataset was generated using the da Vinci Research Kit (dVRK) and is made available.
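The abstract describes parallel processing of a real laparoscopic frame and a simulation frame (rendered from dVRK kinematics) for tool segmentation, but does not specify the architecture. The sketch below is only an illustrative, hypothetical two-stream encoder-decoder in PyTorch; the class name, layer sizes and fusion strategy are assumptions, not the authors' model.

```python
# Minimal sketch (assumed architecture, not the paper's): a dual-branch
# encoder-decoder that fuses features from a real laparoscopic frame and a
# kinematics-rendered simulation frame to predict a binary tool mask.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class DualInputSegNet(nn.Module):
    """Hypothetical two-stream model: one encoder per input modality,
    concatenated features, and a small upsampling decoder."""

    def __init__(self, base_ch=16):
        super().__init__()
        # Separate encoders for the real image and the simulated (rendered) image.
        self.enc_real = nn.Sequential(conv_block(3, base_ch), nn.MaxPool2d(2),
                                      conv_block(base_ch, 2 * base_ch), nn.MaxPool2d(2))
        self.enc_sim = nn.Sequential(conv_block(3, base_ch), nn.MaxPool2d(2),
                                     conv_block(base_ch, 2 * base_ch), nn.MaxPool2d(2))
        # Decoder fuses both streams and upsamples back to input resolution.
        self.decoder = nn.Sequential(
            conv_block(4 * base_ch, 2 * base_ch),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            conv_block(2 * base_ch, base_ch),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(base_ch, 1, 1),  # single-channel tool/background logit map
        )

    def forward(self, real_img, sim_img):
        fused = torch.cat([self.enc_real(real_img), self.enc_sim(sim_img)], dim=1)
        return self.decoder(fused)


if __name__ == "__main__":
    model = DualInputSegNet()
    real = torch.randn(1, 3, 256, 256)  # laparoscopic frame
    sim = torch.randn(1, 3, 256, 256)   # frame rendered from dVRK kinematics
    print(model(real, sim).shape)       # -> torch.Size([1, 1, 256, 256])
```

In practice such a model would be trained with a standard segmentation loss (e.g. binary cross-entropy or Dice) against the ground-truth tool masks from the custom dVRK dataset mentioned above.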