Paper Title
RestoreX-AI: A Contrastive Approach towards Guiding Image Restoration via Explainable AI Systems
Paper Authors
Paper Abstract
Modern applications such as self-driving cars and drones rely heavily on robust object detection techniques. However, weather corruptions can hinder object detectability and pose a serious threat to navigation and reliability. Thus, there is a need for efficient denoising, deraining, and restoration techniques. Generative adversarial networks (GANs) and transformers have been widely adopted for image restoration, but training these methods is often unstable and time-consuming. Furthermore, when used for object detection (OD), the images they generate may yield unsatisfactory results despite their visual clarity. In this work, we propose a contrastive approach to mitigating this problem by evaluating the images generated by restoration models during and after training. The approach combines OD scores with attention maps to predict the usefulness of restored images for the OD task. We conduct experiments using two novel use cases of conditional GANs and two transformer methods, probing the robustness of the proposed approach on multi-weather corruptions in the OD task. Our approach achieves an average 178% increase in mAP between input and restored images under adverse weather conditions such as dust tornadoes and snowfall. We also report unique cases where stronger denoising does not improve OD performance and, conversely, where noisy generated images yield good results. We conclude that explainability frameworks are needed to bridge the gap between human and machine perception, especially in the context of robust object detection for autonomous vehicles.
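As an illustrative sketch only (not the authors' released code), the snippet below shows one way the two quantities named in the abstract could be computed: a usefulness score that combines OD confidences with attention-map coverage, and the relative mAP increase between a corrupted input and its restored counterpart. The function names, the 0.5 activation threshold, and the equal weighting are all assumptions introduced here for illustration.

```python
# Hedged sketch of the abstract's two metrics; all names and weights are
# assumptions, not the paper's actual implementation.
import numpy as np

def usefulness_score(od_confidences, attention_map, conf_weight=0.5):
    """Combine mean OD confidence with the fraction of the attention map
    that is strongly activated; both terms lie in [0, 1]."""
    mean_conf = float(np.mean(od_confidences)) if len(od_confidences) else 0.0
    attention_coverage = float(np.mean(attention_map > 0.5))  # assumed threshold
    return conf_weight * mean_conf + (1.0 - conf_weight) * attention_coverage

def relative_map_increase(map_input, map_restored):
    """Percent increase in mAP from corrupted input to restored image."""
    return 100.0 * (map_restored - map_input) / map_input

# Toy example with placeholder values: a random attention map and three
# detection confidences on a restored image.
rng = np.random.default_rng(0)
attn = rng.random((32, 32))  # stand-in for a real attention map in [0, 1]
print(usefulness_score([0.91, 0.84, 0.77], attn))
print(relative_map_increase(map_input=0.18, map_restored=0.50))  # ~178%
```

The final line illustrates the scale of the reported result: a hypothetical mAP moving from 0.18 to 0.50 corresponds to a roughly 178% relative increase; the abstract's figure is an average over adverse-weather conditions, not these specific values.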