Paper Title

Unsupervised Audio Source Separation using Generative Priors

Paper Authors

Vivek Narayanaswamy, Jayaraman J. Thiagarajan, Rushil Anirudh, Andreas Spanias

Abstract


State-of-the-art under-determined audio source separation systems rely on supervised end-to-end training of carefully tailored neural network architectures operating either in the time or the spectral domain. However, these methods are severely challenged in terms of requiring access to expensive source level labeled data and being specific to a given set of sources and the mixing process, which demands complete re-training when those assumptions change. This strongly emphasizes the need for unsupervised methods that can leverage the recent advances in data-driven modeling, and compensate for the lack of labeled data through meaningful priors. To this end, we propose a novel approach for audio source separation based on generative priors trained on individual sources. Through the use of projected gradient descent optimization, our approach simultaneously searches in the source-specific latent spaces to effectively recover the constituent sources. Though the generative priors can be defined in the time domain directly, e.g. WaveGAN, we find that using spectral domain loss functions for our optimization leads to good-quality source estimates. Our empirical studies on standard spoken digit and instrument datasets clearly demonstrate the effectiveness of our approach over classical as well as state-of-the-art unsupervised baselines.
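The core idea the abstract describes — jointly optimizing the latent codes of per-source generative priors via projected gradient descent against a spectral-domain loss on the mixture — can be sketched in miniature. This is an illustrative toy under stated assumptions, not the paper's implementation: the pretrained generators (e.g. WaveGAN) are replaced by fixed random tanh maps, gradients come from finite differences rather than autograd, the unit-ball projection is an assumed latent constraint, and all sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM_Z, N = 4, 32  # toy latent / signal sizes (illustrative, not the paper's)

# Stand-ins for pretrained source-specific generative priors:
# fixed random tanh maps from each latent space into the time domain.
A1 = rng.standard_normal((N, DIM_Z))
A2 = rng.standard_normal((N, DIM_Z))
g1 = lambda z: np.tanh(A1 @ z)
g2 = lambda z: np.tanh(A2 @ z)

def spectral_loss(mix, z1, z2):
    """Squared error between magnitude spectra of the mixture and summed estimates."""
    mag = lambda x: np.abs(np.fft.rfft(x))
    return float(np.sum((mag(mix) - mag(g1(z1) + g2(z2))) ** 2))

def project(z):
    """Project a latent code back onto the unit ball (assumed latent constraint)."""
    return z / max(1.0, float(np.linalg.norm(z)))

def grad_fd(mix, zs, k, eps=1e-5):
    """Central finite-difference gradient w.r.t. latent k (autograd stand-in)."""
    g = np.zeros(DIM_Z)
    for i in range(DIM_Z):
        zp = [z.copy() for z in zs]
        zm = [z.copy() for z in zs]
        zp[k][i] += eps
        zm[k][i] -= eps
        g[i] = (spectral_loss(mix, *zp) - spectral_loss(mix, *zm)) / (2 * eps)
    return g

def separate(mix, steps=200, lr=1e-3):
    """PGD: simultaneously descend both latent codes, projecting after each step."""
    zs = [project(rng.standard_normal(DIM_Z)) for _ in range(2)]
    losses = [spectral_loss(mix, *zs)]
    for _ in range(steps):
        zs = [project(z - lr * grad_fd(mix, zs, k)) for k, z in enumerate(zs)]
        losses.append(spectral_loss(mix, *zs))
    return zs, losses

# Synthesize a toy mixture from known latents, then try to recover its sources.
mix = g1(project(rng.standard_normal(DIM_Z))) + g2(project(rng.standard_normal(DIM_Z)))
zs_hat, losses = separate(mix)
print(f"spectral loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Note that the optimization is nonconvex, so the sketch only shows the loss decreasing, not guaranteed recovery of the true sources; the paper's contribution is that with strong learned priors and a spectral-domain loss, this search yields good-quality estimates.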
