Paper Title
Explainable deepfake and spoofing detection: an attack analysis using SHapley Additive exPlanations
Paper Authors
Paper Abstract
Despite several years of research in deepfake and spoofing detection for automatic speaker verification, little is known about the artefacts that classifiers use to distinguish between bona fide and spoofed utterances. An understanding of these is crucial to the design of trustworthy, explainable solutions. In this paper we report an extension to attack analysis of our previous work, in which we used SHapley Additive exPlanations (SHAP) to better understand classifier behaviour. Our goal is to identify the artefacts that characterise utterances generated by different attack algorithms. Using a pair of classifiers which operate upon either raw waveforms or magnitude spectrograms, we show that visualisations of SHAP results can be used to identify attack-specific artefacts as well as the differences and consistencies between synthetic speech and converted voice spoofing attacks.
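The abstract describes visualising SHAP values computed for classifiers operating on raw waveforms or magnitude spectrograms. The sketch below illustrates how such an analysis might look with the open-source shap library; it is not the authors' code. The SpoofClassifier architecture, its input shape, and the random tensors standing in for bona fide and attack spectrograms are placeholder assumptions.

```python
# Minimal sketch: SHAP-based attack analysis for a spectrogram classifier.
# The model, input shapes and data below are illustrative assumptions only.
import torch
import torch.nn as nn
import shap
import matplotlib.pyplot as plt


class SpoofClassifier(nn.Module):
    """Toy stand-in for a spectrogram-based spoofing detector (not the authors' model)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)), nn.Flatten(),
            nn.Linear(8 * 8 * 8, 2),  # two logits: [bona fide, spoofed]
        )

    def forward(self, x):
        return self.net(x)


model = SpoofClassifier().eval()

# Background set (e.g. bona fide spectrograms) and utterances to explain
# (e.g. spectrograms produced by one attack algorithm); random here.
background = torch.randn(16, 1, 257, 400)   # (batch, channel, freq bins, frames)
attack_batch = torch.randn(4, 1, 257, 400)

# GradientExplainer approximates Shapley values for differentiable models
# via expected gradients over the background set.
explainer = shap.GradientExplainer(model, background)
sv = explainer.shap_values(attack_batch)

# Depending on the shap version, sv is a list (one array per output class)
# or a single array with a trailing class dimension; take the "spoofed" class.
spoofed_sv = sv[1] if isinstance(sv, list) else sv[..., 1]

# Visualise the SHAP values for the first utterance: positive regions are
# time-frequency bins pushing the decision towards "spoofed", i.e. candidate
# attack-specific artefacts.
plt.imshow(spoofed_sv[0, 0], aspect="auto", origin="lower", cmap="bwr")
plt.xlabel("frame")
plt.ylabel("frequency bin")
plt.colorbar(label="SHAP value")
plt.title("SHAP values for the spoofed-class output")
plt.show()
```

Comparing such heat maps across utterances generated by different attack algorithms is one way to surface the attack-specific artefacts and the differences between synthetic speech and converted voice attacks that the paper reports; a raw-waveform classifier could be analysed analogously with one-dimensional inputs.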