Title
Multi-Modality Cascaded Fusion Technology for Autonomous Driving
Authors
Abstract
Multi-modality fusion is the guarantee of the stability of autonomous driving systems. In this paper, we propose a general multi-modality cascaded fusion framework that exploits the advantages of both decision-level and feature-level fusion, utilizing target position, size, velocity, appearance, and confidence to achieve accurate fusion results. In the fusion process, dynamic coordinate alignment (DCA) is conducted to reduce the error between sensors of different modalities. In addition, since the calculation of the affinity matrix is the core module of sensor fusion, we propose an affinity loss that improves the performance of the deep affinity network (DAN). Finally, the proposed step-by-step cascaded fusion framework is more interpretable and flexible than end-to-end fusion methods. Extensive experiments on the nuScenes [2] dataset show that our approach achieves state-of-the-art performance.
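To make the role of the affinity matrix concrete, the sketch below computes a simple pairwise affinity between two sets of detections from different sensors. This is only an illustrative, distance-based stand-in: the paper's DAN learns affinities from position, size, velocity, appearance, and confidence, and the function names and inputs here are assumptions, not the paper's implementation.

```python
import numpy as np

def affinity_matrix(camera_centers, radar_positions):
    """Illustrative pairwise affinity between camera and radar detections.

    camera_centers:  (N, 2) array of detection centers from one sensor.
    radar_positions: (M, 2) array of target positions from another sensor.
    Returns an (N, M) matrix with values in (0, 1]; higher means a
    more likely match. NOTE: a hand-crafted stand-in for the learned
    deep affinity network described in the abstract.
    """
    cam = np.asarray(camera_centers, dtype=float)
    rad = np.asarray(radar_positions, dtype=float)
    # Euclidean distance between every camera/radar pair via broadcasting.
    dists = np.linalg.norm(cam[:, None, :] - rad[None, :, :], axis=-1)
    # Map distance to a similarity: identical positions give affinity 1.0.
    return np.exp(-dists)
```

A matching step (e.g. Hungarian assignment) would then select high-affinity pairs from this matrix to fuse detections across sensors.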