Paper Title
Point Label Aware Superpixels for Multi-species Segmentation of Underwater Imagery
Paper Authors
Paper Abstract
Monitoring coral reefs using underwater vehicles increases the range of marine surveys and the availability of historical ecological data by collecting significant quantities of images. Analysis of this imagery can be automated using a model trained to perform semantic segmentation; however, it is too costly and time-consuming to densely label images for training supervised models. In this letter, we leverage photo-quadrat imagery labeled by ecologists with sparse point labels. We propose a point label aware method for propagating labels within superpixel regions to obtain augmented ground truth for training a semantic segmentation model. Our point label aware superpixel method utilizes the sparse point labels and clusters pixels using learned features to accurately generate single-species segments in cluttered, complex coral images. Our method outperforms prior methods on the UCSD Mosaics dataset by 3.62% for pixel accuracy and 8.35% for mean IoU on the label propagation task, while reducing the computation time reported by previous approaches by 76%. We train a DeepLabv3+ architecture and outperform the state of the art for semantic segmentation by 2.91% for pixel accuracy and 9.65% for mean IoU on the UCSD Mosaics dataset, and by 4.19% for pixel accuracy and 14.32% for mean IoU on the Eilat dataset.
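The core idea of the abstract — expanding sparse point labels into dense augmented ground truth via superpixels — can be illustrated with a minimal sketch. This is not the paper's learned-feature method; it is a generic majority-vote baseline, assuming a precomputed superpixel map and a hypothetical `propagate_point_labels` helper:

```python
import numpy as np

def propagate_point_labels(superpixels, points, n_classes, unlabeled=-1):
    """Expand sparse point labels into a dense label map via superpixels.

    superpixels: (H, W) int array giving each pixel's segment id (0..K-1).
    points: iterable of (row, col, class_id) sparse point annotations.
    Returns an (H, W) array; segments with no point stay `unlabeled`.
    """
    n_segments = superpixels.max() + 1
    # Tally votes per (segment, class) from the sparse point labels.
    votes = np.zeros((n_segments, n_classes), dtype=int)
    for r, c, cls in points:
        votes[superpixels[r, c], cls] += 1
    # Majority vote per segment; unvisited segments remain unlabeled.
    seg_label = np.where(votes.sum(axis=1) > 0,
                         votes.argmax(axis=1), unlabeled)
    # Broadcast each segment's label back to every pixel it covers.
    return seg_label[superpixels]
```

In practice the quality of the resulting augmented ground truth depends entirely on how well superpixel boundaries align with species boundaries, which is why the paper's method makes the superpixel clustering itself point-label aware rather than relying on generic oversegmentation.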