Paper Title

Attention-based Transducer for Online Speech Recognition

Paper Authors

Bin Wang, Yan Yin, Hui Lin

Paper Abstract

Recent studies reveal the potential of the recurrent neural network transducer (RNN-T) for end-to-end (E2E) speech recognition. Among the most popular E2E systems, including RNN-T, Attention Encoder-Decoder (AED), and Connectionist Temporal Classification (CTC), RNN-T has clear advantages in that it supports streaming recognition and does not make a frame-independence assumption. Although significant progress has been made in RNN-T research, it still faces performance challenges in terms of training speed and accuracy. We propose an attention-based transducer that modifies RNN-T in two aspects. First, we introduce chunk-wise attention in the joint network. Second, we introduce self-attention in the encoder. Our proposed model outperforms RNN-T in both training speed and accuracy. For training, we achieve over a 1.7x speedup. With 500 hours of LAIX non-native English training data, the attention-based transducer yields a ~10.6% WER reduction over the baseline RNN-T. Trained on the full set of over 10K hours of data, our final system achieves a ~5.5% WER reduction over a system trained with the best Kaldi TDNN-F recipe. After 8-bit weight quantization without WER degradation, RTF and latency drop to 0.34-0.36 and 268-409 milliseconds, respectively, on a single CPU core of a production server.
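
To make the first modification concrete, here is a minimal PyTorch sketch of chunk-wise attention in a transducer joint network. It is an illustrative reading of the abstract, not the authors' implementation: the class name ChunkAttentionJoint, the chunk_size parameter, and the choice of the prediction-network state as the attention query are all assumptions made for this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChunkAttentionJoint(nn.Module):
    """Hypothetical sketch of a joint network with chunk-wise attention.

    Instead of combining the prediction-network state with a single
    encoder frame (as in vanilla RNN-T), each label step attends over a
    fixed-size chunk of encoder frames. Names and shapes are assumptions
    for illustration, not taken from the paper.
    """

    def __init__(self, enc_dim, pred_dim, joint_dim, vocab_size, chunk_size=4):
        super().__init__()
        self.chunk_size = chunk_size
        self.query = nn.Linear(pred_dim, enc_dim)          # prediction state -> query
        self.proj = nn.Linear(enc_dim + pred_dim, joint_dim)
        self.out = nn.Linear(joint_dim, vocab_size)

    def forward(self, enc, pred):
        # enc:  (B, T, enc_dim)  encoder outputs
        # pred: (B, U, pred_dim) prediction-network outputs
        B, T, E = enc.shape
        # Zero-pad T to a multiple of chunk_size, then group frames into chunks.
        pad = (-T) % self.chunk_size
        enc = F.pad(enc, (0, 0, 0, pad))
        chunks = enc.view(B, -1, self.chunk_size, E)       # (B, C, K, E)
        q = self.query(pred)                               # (B, U, E)
        # Score every label step against each frame within a chunk,
        # then normalize the weights within that chunk.
        scores = torch.einsum('bue,bcke->bcuk', q, chunks) / E ** 0.5
        attn = scores.softmax(dim=-1)                      # (B, C, U, K)
        ctx = torch.einsum('bcuk,bcke->bcue', attn, chunks)  # attended context
        # Standard tanh joint over attended context + prediction state.
        pred_exp = pred.unsqueeze(1).expand(-1, ctx.size(1), -1, -1)
        joint = torch.tanh(self.proj(torch.cat([ctx, pred_exp], dim=-1)))
        return self.out(joint)                             # (B, C, U, vocab_size)


# Usage: logits over a (chunk, label) lattice for transducer training/decoding.
joint = ChunkAttentionJoint(enc_dim=320, pred_dim=256, joint_dim=512, vocab_size=100)
logits = joint(torch.randn(2, 50, 320), torch.randn(2, 10, 256))
print(logits.shape)  # torch.Size([2, 13, 10, 100]): 50 frames -> 13 chunks of 4
```

In this reading, the time axis of the output lattice becomes chunks rather than individual frames: each label step attends within one chunk of encoder frames, and the attended context then enters an otherwise standard tanh joint layer. Because attention is confined to a local chunk, the model stays compatible with streaming recognition.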
