Paper Title
Multi-Modal Face Anti-Spoofing Based on Central Difference Networks
Paper Authors
Paper Abstract
Face anti-spoofing (FAS) plays a vital role in securing face recognition systems against presentation attacks. Existing multi-modal FAS methods rely on stacked vanilla convolutions, which are weak at describing detailed intrinsic information from the modalities and easily become ineffective when the domain shifts (e.g., cross-attack and cross-ethnicity). In this paper, we extend central difference convolutional networks (CDCN) \cite{yu2020searching} to a multi-modal version, aiming to capture intrinsic spoofing patterns among three modalities (RGB, depth, and infrared). We also provide an elaborate study of the single-modal CDCN. Our approach won first place in "Track Multi-Modal" and second place in "Track Single-Modal (RGB)" of the ChaLearn Face Anti-spoofing Attack Detection Challenge@CVPR2020 \cite{liu2020cross}. Our final submission obtains 1.02$\pm$0.59\% and 4.84$\pm$1.79\% ACER in "Track Multi-Modal" and "Track Single-Modal (RGB)", respectively. The code is available at https://github.com/ZitongYu/CDCN.
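For context, below is a minimal PyTorch sketch of the central difference convolution operator that CDCN builds on, following the formulation in the cited CDCN paper (vanilla convolution blended with a central-difference term via a hyperparameter theta). The class name `CentralDifferenceConv2d` and the default `theta=0.7` are illustrative assumptions here, not the authors' exact implementation; see the linked repository for the official code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CentralDifferenceConv2d(nn.Module):
    """Sketch of central difference convolution (CDC).

    Output = vanilla convolution - theta * (center-pixel response weighted
    by the kernel's spatial sum), which is algebraically equivalent to
    mixing a vanilla term and a central-difference term.
    """

    def __init__(self, in_channels, out_channels, kernel_size=3,
                 stride=1, padding=1, bias=False, theta=0.7):
        super().__init__()
        # theta=0.7 is used here as an assumed default; theta=0 recovers
        # a plain (vanilla) convolution.
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                              stride=stride, padding=padding, bias=bias)
        self.theta = theta

    def forward(self, x):
        out_vanilla = self.conv(x)
        if self.theta == 0:
            return out_vanilla
        # Sum each kernel over its spatial extent to get a 1x1 kernel;
        # convolving x with it yields the center-pixel term of CDC.
        kernel_diff = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        out_diff = F.conv2d(x, kernel_diff, bias=None,
                            stride=self.conv.stride, padding=0)
        return out_vanilla - self.theta * out_diff


if __name__ == "__main__":
    # Toy usage: one CDC layer per modality (RGB, depth, infrared) before
    # fusion, mirroring the multi-modal setting described in the abstract.
    rgb = torch.randn(2, 3, 256, 256)
    cdc = CentralDifferenceConv2d(3, 64)
    print(cdc(rgb).shape)  # torch.Size([2, 64, 256, 256])
```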