Paper Title

UniFormer: Unified Transformer for Efficient Spatiotemporal Representation Learning

Paper Authors

Kunchang Li, Yali Wang, Peng Gao, Guanglu Song, Yu Liu, Hongsheng Li, Yu Qiao

Paper Abstract

It is challenging to learn rich and multi-scale spatiotemporal semantics from high-dimensional videos, due to large local redundancy and complex global dependency between video frames. Recent advances in this area have been driven mainly by 3D convolutional neural networks and vision transformers. Although 3D convolution can efficiently aggregate local context to suppress local redundancy within a small 3D neighborhood, it lacks the capability to capture global dependency because of its limited receptive field. Alternatively, vision transformers can effectively capture long-range dependency through the self-attention mechanism, but they are limited in reducing local redundancy because of blind similarity comparison among all tokens in each layer. Based on these observations, we propose a novel Unified transFormer (UniFormer), which seamlessly integrates the merits of 3D convolution and spatiotemporal self-attention in a concise transformer format and achieves a preferable balance between computation and accuracy. Different from traditional transformers, our relation aggregator can tackle both spatiotemporal redundancy and dependency by learning local and global token affinity in shallow and deep layers, respectively. We conduct extensive experiments on popular video benchmarks, e.g., Kinetics-400, Kinetics-600, and Something-Something V1&V2. With only ImageNet-1K pretraining, our UniFormer achieves 82.9%/84.8% top-1 accuracy on Kinetics-400/Kinetics-600, while requiring 10x fewer GFLOPs than other state-of-the-art methods. For Something-Something V1 and V2, our UniFormer achieves new state-of-the-art performance of 60.9% and 71.2% top-1 accuracy, respectively. Code is available at https://github.com/Sense-X/UniFormer.
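
To make the design described in the abstract concrete, below is a minimal sketch (not the authors' implementation; see the repository linked above for the actual code) of the two kinds of relation aggregators it mentions: shallow blocks learn local token affinity over a small 3D neighborhood, expressed here as a depthwise 3D convolution, while deep blocks learn global token affinity over all spatiotemporal tokens via multi-head self-attention. The module names, kernel size, and head count are illustrative assumptions, written in PyTorch.

```python
# A minimal sketch of the UniFormer idea: local relation aggregation in shallow
# layers, global relation aggregation in deep layers. Names and hyperparameters
# are assumptions for illustration only.
import torch
import torch.nn as nn


class LocalAggregator(nn.Module):
    """Learns token affinity within a small 3D neighborhood (shallow layers)."""

    def __init__(self, dim, kernel_size=(3, 5, 5)):
        super().__init__()
        padding = tuple(k // 2 for k in kernel_size)
        # A depthwise 3D convolution acts as a learnable local relation aggregator.
        self.conv = nn.Conv3d(dim, dim, kernel_size, padding=padding, groups=dim)

    def forward(self, x):  # x: (B, C, T, H, W)
        return x + self.conv(x)  # residual connection


class GlobalAggregator(nn.Module):
    """Learns token affinity among all spatiotemporal tokens (deep layers)."""

    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):  # x: (B, C, T, H, W)
        b, c, t, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)        # (B, T*H*W, C)
        normed = self.norm(tokens)
        out, _ = self.attn(normed, normed, normed)   # global self-attention
        return x + out.transpose(1, 2).reshape(b, c, t, h, w)


if __name__ == "__main__":
    video = torch.randn(2, 64, 8, 14, 14)  # (batch, channels, frames, height, width)
    shallow = LocalAggregator(64)
    deep = GlobalAggregator(64)
    print(deep(shallow(video)).shape)  # torch.Size([2, 64, 8, 14, 14])
```

In this sketch, stacking LocalAggregator blocks first and GlobalAggregator blocks later mirrors the shallow-to-deep division the abstract describes: cheap local aggregation suppresses redundancy on high-resolution token maps, and self-attention captures long-range dependency once the token count is smaller.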
