Paper Title

Multistream CNN for Robust Acoustic Modeling

Paper Authors

Kyu J. Han, Jing Pan, Venkata Krishna Naveen Tadala, Tao Ma, Dan Povey

Paper Abstract

This paper proposes multistream CNN, a novel neural network architecture for robust acoustic modeling in speech recognition tasks. The proposed architecture processes input speech at diverse temporal resolutions by applying different dilation rates to convolutional neural networks across multiple streams, achieving robustness. The dilation rates are selected from multiples of the 3-frame sub-sampling rate. Each stream stacks TDNN-F layers (a variant of 1D CNN), and the output embedding vectors from the streams are concatenated and then projected to the final layer. We validate the effectiveness of the proposed multistream CNN architecture by showing consistent improvements over Kaldi's best TDNN-F model across various data sets. Multistream CNN improves the WER on the test-other set of the LibriSpeech corpus by 12% (relative). On custom data from ASAPP's production ASR system for a contact center, it achieves a relative WER improvement of 11% on customer-channel audio, demonstrating its robustness to data in the wild. In terms of real-time factor, multistream CNN outperforms the baseline TDNN-F by 15%, which also suggests its practicality in production systems. When combined with self-attentive SRU LM rescoring, multistream CNN helps ASAPP achieve the best WER of 1.75% on test-clean in LibriSpeech.
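
To make the architecture concrete, the sketch below illustrates the core idea from the abstract: parallel convolutional streams over the same input, each with a different dilation rate drawn from multiples of the 3-frame sub-sampling rate, whose output embeddings are concatenated and then projected. This is a minimal PyTorch sketch, not the authors' Kaldi TDNN-F recipe: plain dilated Conv1d layers stand in for TDNN-F, and all names and dimensions (feat_dim, hidden_dim, num_layers, num_targets) are illustrative assumptions.

```python
# Minimal sketch of the multistream idea (assumed hyperparameters, not the
# authors' Kaldi implementation). Each stream is a stack of dilated 1D
# convolutions standing in for TDNN-F layers; dilation rates are multiples
# of the 3-frame sub-sampling rate, per the abstract.
import torch
import torch.nn as nn

class MultistreamCNN(nn.Module):
    def __init__(self, feat_dim=40, hidden_dim=256, num_layers=5,
                 dilations=(3, 6, 9), num_targets=3000):
        super().__init__()
        self.streams = nn.ModuleList()
        for d in dilations:
            layers, in_dim = [], feat_dim
            for _ in range(num_layers):
                # kernel_size=3 with dilation d; padding=d keeps the frame
                # count unchanged so stream outputs can be concatenated.
                layers += [nn.Conv1d(in_dim, hidden_dim, kernel_size=3,
                                     dilation=d, padding=d),
                           nn.ReLU(),
                           nn.BatchNorm1d(hidden_dim)]
                in_dim = hidden_dim
            self.streams.append(nn.Sequential(*layers))
        # Per-stream embeddings are concatenated, then projected to the
        # output targets (e.g., senone posteriors in a hybrid ASR system).
        self.proj = nn.Linear(hidden_dim * len(dilations), num_targets)

    def forward(self, x):
        # x: (batch, feat_dim, frames)
        outs = [stream(x) for stream in self.streams]  # one embedding per stream
        cat = torch.cat(outs, dim=1)                   # (batch, hidden*streams, frames)
        return self.proj(cat.transpose(1, 2))          # (batch, frames, targets)

# Example: 8 utterances, 200 frames, 40-dim features.
model = MultistreamCNN()
logits = model(torch.randn(8, 40, 200))
print(logits.shape)  # torch.Size([8, 200, 3000])
```

Because every stream uses "same" padding, all streams produce embeddings with the same number of frames, which is what makes the final concatenation across streams well defined; only the receptive field per stream changes with the dilation rate.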
