Paper Title

DiResNet: Direction-aware Residual Network for Road Extraction in VHR Remote Sensing Images

Paper Authors

Lei Ding, Lorenzo Bruzzone

Paper Abstract

The binary segmentation of roads in very high resolution (VHR) remote sensing images (RSIs) has always been a challenging task due to factors such as occlusions (caused by shadows, trees, buildings, etc.) and the intra-class variances of road surfaces. The wide use of convolutional neural networks (CNNs) has greatly improved the segmentation accuracy and made the task end-to-end trainable. However, there are still margins to improve in terms of the completeness and connectivity of the results. In this paper, we consider the specific context of road extraction and present a direction-aware residual network (DiResNet) that includes three main contributions: 1) An asymmetric residual segmentation network with deconvolutional layers and a structural supervision to enhance the learning of road topology (DiResSeg); 2) A pixel-level supervision of local directions to enhance the embedding of linear features; 3) A refinement network to optimize the segmentation results (DiResRef). Ablation studies on two benchmark datasets (the Massachusetts dataset and the DeepGlobe dataset) have confirmed the effectiveness of the presented designs. Comparative experiments with other approaches show that the proposed method has advantages in both overall accuracy and F1-score. The code is available at: https://github.com/ggsDing/DiResNet.
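
For readers who want a concrete picture of the three components named in the abstract, the sketch below is a minimal PyTorch illustration, not the authors' DiResNet: the backbone depth, the use of four quantized direction classes, and the residual-correction form of the refinement step are assumptions made here for illustration only. The official implementation is available at https://github.com/ggsDing/DiResNet.

```python
# Minimal sketch of the three-part design described in the abstract:
# a residual segmentation network with a deconvolutional decoder, auxiliary
# structural and local-direction supervision heads, and a small refinement
# network. All layer sizes and the number of direction classes are assumptions.
import torch
import torch.nn as nn


class ResBlock(nn.Module):
    """A basic residual block (assumed building unit of the encoder)."""

    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.body(x))


class SegNetSketch(nn.Module):
    """Encoder-decoder with transposed-conv upsampling and three outputs:
    the road mask, an auxiliary structural map, and a pixel-wise map of
    local directions (assumed here to be a classification over a fixed
    number of quantized orientations)."""

    def __init__(self, num_directions=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1),    # 1/2 resolution
            nn.ReLU(inplace=True),
            ResBlock(64),
            nn.Conv2d(64, 128, 3, stride=2, padding=1),  # 1/4 resolution
            nn.ReLU(inplace=True),
            ResBlock(128),
        )
        # Deconvolutional (transposed-conv) decoder back to full resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.seg_head = nn.Conv2d(32, 1, 1)                 # road / non-road
        self.struct_head = nn.Conv2d(32, 1, 1)              # structural supervision
        self.dir_head = nn.Conv2d(32, num_directions, 1)    # local directions

    def forward(self, x):
        feats = self.decoder(self.encoder(x))
        return self.seg_head(feats), self.struct_head(feats), self.dir_head(feats)


class RefineNetSketch(nn.Module):
    """Small refinement net: takes the image and the coarse road probability
    map and predicts a residual correction to the mask."""

    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1),
            nn.ReLU(inplace=True),
            ResBlock(32),
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, image, coarse_mask):
        x = torch.cat([image, coarse_mask], dim=1)
        return coarse_mask + self.body(x)  # refine as a residual correction


if __name__ == "__main__":
    image = torch.randn(1, 3, 256, 256)
    seg_net, refine_net = SegNetSketch(), RefineNetSketch()
    road, structure, direction = seg_net(image)
    refined = refine_net(image, torch.sigmoid(road))
    print(road.shape, structure.shape, direction.shape, refined.shape)
```

In this sketch the segmentation network would be trained jointly on the road mask, the structural map, and the direction map, with the refinement network applied to the coarse prediction; the actual backbone, loss weighting, and direction encoding should be taken from the paper and the released code.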
