Paper Title
Dynamic Appearance: A Video Representation for Action Recognition with Joint Training
Paper Authors
Paper Abstract
The static appearance of a video may impede the ability of a deep neural network to learn motion-relevant features in video action recognition. In this paper, we introduce a new concept, Dynamic Appearance (DA), which summarizes the appearance information related to movement in a video while filtering out static information considered unrelated to motion. We regard distilling the dynamic appearance from raw video data as a means of efficient video understanding. To this end, we propose Pixel-Wise Temporal Projection (PWTP), which projects the static appearance of a video into a subspace within its original vector space, while the dynamic appearance is encoded in the projection residual, which describes a distinctive motion pattern. Moreover, we integrate the PWTP module with a CNN or Transformer into an end-to-end training framework, which is optimized by utilizing multi-objective optimization algorithms. We provide extensive experimental results on four action recognition benchmarks: Kinetics400, Something-Something V1, UCF101 and HMDB51.
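The core idea of PWTP can be illustrated with a minimal numpy sketch. Note that this is only an assumption-laden stand-in: the paper learns the projection end-to-end inside the network, whereas the sketch below approximates the static subspace with the top-k temporal singular vectors (a fixed, non-learned basis). The function name `pwtp_sketch` and the choice of SVD are illustrative, not the authors' implementation.

```python
import numpy as np

def pwtp_sketch(video, k=1):
    """Illustrative stand-in for PWTP (not the paper's learned module).

    Each pixel/channel traces a trajectory over the T frames. We project
    all trajectories onto a k-dimensional temporal subspace (here: the
    top-k left singular vectors), which captures the static appearance;
    the projection residual plays the role of the dynamic appearance.
    """
    t, h, w, c = video.shape
    x = video.reshape(t, -1)            # (T, H*W*C): one temporal trajectory per column
    # Orthonormal basis of the "static" temporal subspace.
    u, _, _ = np.linalg.svd(x, full_matrices=False)
    basis = u[:, :k]                    # (T, k)
    static = basis @ (basis.T @ x)      # projection of each trajectory onto the subspace
    dynamic = x - static                # projection residual = dynamic appearance
    return static.reshape(video.shape), dynamic.reshape(video.shape)

# Toy usage: an 8-frame RGB clip; static + dynamic reconstructs the input.
video = np.random.rand(8, 4, 4, 3).astype(np.float32)
static, dynamic = pwtp_sketch(video, k=1)
assert np.allclose(static + dynamic, video, atol=1e-5)
```

With k=1 the static component is close to a per-pixel temporal average, so the residual mostly retains appearance changes caused by motion, which is the intuition behind feeding the residual to the downstream CNN or Transformer.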