Paper Title

An End-to-End Integrated Computation and Communication Architecture for Goal-oriented Networking: A Perspective on Live Surveillance Video

Paper Authors

Suvadip Batabyal, Ozgur Ercetin

Paper Abstract

Real-time video surveillance has become a crucial technology for smart cities, made possible through the large-scale deployment of mobile and fixed video cameras. In this paper, we propose situation-aware streaming, for real-time identification of important events from live-feeds at the source rather than a cloud based analysis. For this, we first identify the frames containing a specific situation and assign them a high scale-of-importance (SI). The identification is made at the source using a tiny neural network (having a small number of hidden layers), which incurs a small computational resource, albeit at the cost of accuracy. The frames with a high SI value are then streamed with a certain required Signal-to-Noise-Ratio (SNR) to retain the frame quality, while the remaining ones are transmitted with a small SNR. The received frames are then analyzed using a deep neural network (with many hidden layers) to extract the situation accurately. We show that the proposed scheme is able to reduce the required power consumption of the transmitter by 38.5% for 2160p (UHD) video, while achieving a classification accuracy of 97.5%, for the given situation.
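The streaming policy the abstract describes can be sketched as follows: a lightweight classifier at the source assigns each frame a scale-of-importance (SI) score, and frames above a threshold are streamed at a high target SNR while the rest use a low SNR. The classifier, threshold, and SNR values below are illustrative placeholders, not the paper's actual model or parameters.

```python
HIGH_SNR_DB = 20.0   # assumed target SNR for high-SI frames
LOW_SNR_DB = 5.0     # assumed target SNR for the remaining frames
SI_THRESHOLD = 0.5   # assumed decision threshold on the SI score

def tiny_classifier(frame):
    """Placeholder for the source-side tiny neural network.

    Returns an SI score in [0, 1]; here it is simply the mean pixel
    intensity of a toy frame (a flat list of 8-bit pixel values).
    """
    return sum(frame) / (255.0 * len(frame))

def select_snr(frame):
    """Pick the transmit SNR for a frame based on its SI score."""
    si = tiny_classifier(frame)
    return HIGH_SNR_DB if si >= SI_THRESHOLD else LOW_SNR_DB

# Toy "frames": a dark (unimportant) and a bright (important) one.
frames = [[10, 20, 30], [200, 220, 240]]
plan = [select_snr(f) for f in frames]
```

In the paper's setup the tiny network trades accuracy for low source-side compute, and the receiver re-analyzes the high-SNR frames with a deep network; the sketch only captures the per-frame SNR assignment step.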
