Paper Title


Back to Event Basics: Self-Supervised Learning of Image Reconstruction for Event Cameras via Photometric Constancy

Authors

Paredes-Vallés, F., de Croon, G. C. H. E.

Abstract


Event cameras are novel vision sensors that sample, in an asynchronous fashion, brightness increments with low latency and high temporal resolution. The resulting streams of events are of high value by themselves, especially for high speed motion estimation. However, a growing body of work has also focused on the reconstruction of intensity frames from the events, as this allows bridging the gap with the existing literature on appearance- and frame-based computer vision. Recent work has mostly approached this problem using neural networks trained with synthetic, ground-truth data. In this work we approach, for the first time, the intensity reconstruction problem from a self-supervised learning perspective. Our method, which leverages the knowledge of the inner workings of event cameras, combines estimated optical flow and the event-based photometric constancy to train neural networks without the need for any ground-truth or synthetic data. Results across multiple datasets show that the performance of the proposed self-supervised approach is in line with the state-of-the-art. Additionally, we propose a novel, lightweight neural network for optical flow estimation that achieves high speed inference with only a minor drop in performance.
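The abstract's key idea is to train without ground truth by exploiting the event-based photometric constancy: under the linearized event-generation model, the brightness change observed as events over a short window should match the change predicted by the optical flow and the spatial gradient of the (log) intensity image, i.e. ΔL ≈ −∇L · u. Below is a minimal NumPy sketch of such a residual; the function names and the choice of finite-difference gradients are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def photometric_residual(log_intensity, flow, event_brightness):
    """Residual of the event-based photometric constancy (sketch).

    log_intensity:    (H, W) reconstructed log-intensity image L
    flow:             (H, W, 2) optical flow u = (u_x, u_y), in
                      pixels per accumulation window
    event_brightness: (H, W) brightness change accumulated from the
                      events over the same window (sum of polarities
                      times the contrast threshold)

    Under the linearized event-generation model, Delta L ~ -grad(L) . u,
    so the residual between the flow-predicted change and the
    event-measured change can drive a self-supervised loss.
    """
    # Finite-difference spatial gradients of the log-intensity image.
    grad_y, grad_x = np.gradient(log_intensity)
    # Flow-predicted brightness change: -(grad(L) . u).
    predicted = -(grad_x * flow[..., 0] + grad_y * flow[..., 1])
    return predicted - event_brightness

def photometric_loss(log_intensity, flow, event_brightness):
    """Mean squared photometric-constancy residual."""
    r = photometric_residual(log_intensity, flow, event_brightness)
    return float(np.mean(r ** 2))
```

In a training setup, both `log_intensity` (from the reconstruction network) and `flow` (from the flow network) would be differentiable predictions, and minimizing this loss couples the two without any ground-truth frames.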
