Paper Title
Attention is All You Need in Speech Separation
Paper Authors
Paper Abstract
Recurrent Neural Networks (RNNs) have long been the dominant architecture in sequence-to-sequence learning. RNNs, however, are inherently sequential models that do not allow parallelization of their computations. Transformers are emerging as a natural alternative to standard RNNs, replacing recurrent computations with a multi-head attention mechanism. In this paper, we propose the SepFormer, a novel RNN-free Transformer-based neural network for speech separation. The SepFormer learns short- and long-term dependencies with a multi-scale approach that employs Transformers. The proposed model achieves state-of-the-art (SOTA) performance on the standard WSJ0-2/3mix datasets. It reaches an SI-SNRi of 22.3 dB on WSJ0-2mix and an SI-SNRi of 19.5 dB on WSJ0-3mix. The SepFormer inherits the parallelization advantages of Transformers and achieves competitive performance even when downsampling the encoded representation by a factor of 8. It is thus significantly faster and less memory-demanding than the latest speech separation systems with comparable performance.
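To make the two key ideas in the abstract concrete, the sketch below illustrates (a) a dual-path, RNN-free block in which one Transformer stack models short-term dependencies within chunks and another models long-term dependencies across chunks, and (b) the SI-SNRi metric used to report the 22.3 dB / 19.5 dB results. This is a minimal illustration, not the authors' implementation: class and function names such as SepFormerBlockSketch and si_snri, and all hyper-parameters, are assumptions for demonstration only.

```python
# Minimal sketch of a SepFormer-style dual-path block and the SI-SNRi metric.
# All names and hyper-parameters below are illustrative assumptions,
# not values taken from the paper.

import torch
import torch.nn as nn


class SepFormerBlockSketch(nn.Module):
    """One dual-path block: intra-chunk then inter-chunk multi-head attention."""

    def __init__(self, d_model=256, n_heads=8, ff_dim=1024, n_layers=2):
        super().__init__()

        def make_stack():
            layer = nn.TransformerEncoderLayer(
                d_model, n_heads, dim_feedforward=ff_dim, batch_first=True
            )
            return nn.TransformerEncoder(layer, num_layers=n_layers)

        self.intra = make_stack()  # short-term dependencies (within each chunk)
        self.inter = make_stack()  # long-term dependencies (across chunks)

    def forward(self, x):
        # x: (batch, n_chunks, chunk_len, d_model)
        b, s, k, d = x.shape
        # Intra-chunk attention: each chunk is treated as one sequence.
        x = self.intra(x.reshape(b * s, k, d)).reshape(b, s, k, d)
        # Inter-chunk attention: attend across chunks at each intra-chunk position.
        x = x.permute(0, 2, 1, 3).reshape(b * k, s, d)
        x = self.inter(x).reshape(b, k, s, d).permute(0, 2, 1, 3)
        return x


def si_snri(est, src, mix, eps=1e-8):
    """Scale-invariant SNR improvement (dB) of an estimate over the raw mixture."""

    def si_snr(e, t):
        t = t - t.mean(-1, keepdim=True)
        e = e - e.mean(-1, keepdim=True)
        proj = (e * t).sum(-1, keepdim=True) / (t.pow(2).sum(-1, keepdim=True) + eps) * t
        noise = e - proj
        return 10 * torch.log10(proj.pow(2).sum(-1) / (noise.pow(2).sum(-1) + eps))

    return si_snr(est, src) - si_snr(mix, src)


if __name__ == "__main__":
    block = SepFormerBlockSketch()
    chunks = torch.randn(2, 10, 250, 256)  # (batch, n_chunks, chunk_len, features)
    print(block(chunks).shape)             # torch.Size([2, 10, 250, 256])
```

Because both attention passes operate on all chunk positions at once, the block has no sequential recurrence and can be fully parallelized, which is the property the abstract contrasts with RNN-based separators.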