Multichannel audio source separation with deep neural networks

Speaker: Aditya Arie Nugraha (PhD student)

Date: November 26, 2015

Abstract:

Many studies have shown that the use of deep neural networks (DNNs) for audio source separation is extremely promising. However, most of these studies address single-channel source separation, and the existing literature lacks a framework for exploiting DNNs in multichannel audio source separation. In this talk, I will present my work on a DNN-based multichannel source separation framework, in which the source spectra are estimated by DNNs and then used, together with spatial covariance matrices that encode the spatial information, to derive a multichannel filter within an iterative expectation-maximization (EM) algorithm. Experiments were designed to study the impact of several design choices on the performance of the proposed framework, including the cost functions used to train the DNNs, the number of EM iterations, and the number of DNNs. Several experimental results will be presented.
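To give a concrete sense of how DNN-estimated spectra and spatial covariance matrices can be combined into a multichannel filter, here is a minimal NumPy sketch of the multichannel Wiener filtering step commonly used in local Gaussian EM frameworks of this kind. The variable names, array shapes, and the specific filter form are my own assumptions for illustration, not details taken from the talk abstract.

```python
# Minimal sketch (assumed notation) of a multichannel Wiener filtering step:
# source spectra v (e.g. estimated by DNNs) and spatial covariance matrices R
# are combined to filter the mixture STFT x into per-source image estimates.
import numpy as np


def multichannel_wiener_filter(x, v, R):
    """Estimate the spatial image of each source from the mixture STFT.

    x : (F, N, I)    complex mixture STFT (F frequency bins, N frames, I channels)
    v : (J, F, N)    non-negative source spectra for J sources (e.g. DNN outputs)
    R : (J, F, I, I) spatial covariance matrix of each source at each frequency
    Returns c_hat : (J, F, N, I) estimated source images.
    """
    J, F, N = v.shape
    I = x.shape[-1]
    c_hat = np.zeros((J, F, N, I), dtype=complex)
    eye = np.eye(I)
    for f in range(F):
        for n in range(N):
            # Mixture covariance model: sum over sources of v_j(f,n) * R_j(f)
            Rx = sum(v[j, f, n] * R[j, f] for j in range(J))
            Rx_inv = np.linalg.inv(Rx + 1e-9 * eye)  # small loading for stability
            for j in range(J):
                W = v[j, f, n] * R[j, f] @ Rx_inv    # Wiener gain of source j
                c_hat[j, f, n] = W @ x[f, n]
    return c_hat
```

In an EM scheme along the lines described above, a step of this kind would alternate with updates of the source spectra and of the spatial covariance matrices until convergence; the sketch only illustrates the filtering itself.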