Speaker: Dayana Ribas
Date: November 3, 2016
There is currently increasing interest in developing technologies that integrate biometric systems, due to their wide use in applications where the identification of individuals is required. In this context, automatic speaker recognition systems are in high demand for both commercial and security applications, and they have therefore been the focus of considerable research effort in recent decades. Since the early 1990s, the probabilistic nature of Gaussian Mixture Models (GMM) has been exploited to represent the large variability of speech data through unsupervised modeling of acoustic classes. Later, to provide robustness against session variability, the Factor Analysis model was introduced, giving rise to the current i-vector framework. More recently, the large performance gains obtained with Deep Neural Networks (DNN) in speech recognition have motivated the introduction of DNN methods into speaker recognition. Despite this body of work, significant challenges remain before highly accurate systems can be deployed in practical scenarios. This talk will go over the current challenges of speaker recognition and recent developments that address them. I will also present results from my previous work on noise compensation methods for speaker recognition, including multicondition training and uncertainty propagation, as well as current research directions and some future perspectives.
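As a minimal illustration of the GMM-based modeling mentioned above (a sketch only, not the speaker's actual system): each enrolled speaker is represented by a GMM fit on that speaker's feature vectors, and a test utterance is attributed to the speaker whose model yields the highest average log-likelihood. The synthetic 20-dimensional features below are stand-ins for real acoustic features such as MFCCs, and the component count and data shapes are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic "acoustic features" for two enrolled speakers
# (assumption: 500 frames of 20-dimensional vectors each).
speaker_a = rng.normal(loc=0.0, scale=1.0, size=(500, 20))
speaker_b = rng.normal(loc=3.0, scale=1.0, size=(500, 20))

# Fit one GMM per speaker: unsupervised modeling of acoustic classes.
models = {
    "A": GaussianMixture(n_components=4, random_state=0).fit(speaker_a),
    "B": GaussianMixture(n_components=4, random_state=0).fit(speaker_b),
}

def identify(utterance):
    """Return the enrolled speaker whose GMM best explains the utterance."""
    # score() gives the average per-frame log-likelihood under each model.
    scores = {name: gmm.score(utterance) for name, gmm in models.items()}
    return max(scores, key=scores.get)

# A test utterance drawn from the same distribution as speaker B.
test_utt = rng.normal(loc=3.0, scale=1.0, size=(100, 20))
print(identify(test_utt))
```

State-of-the-art systems build on this idea (e.g. adapting a universal background model per speaker, then compensating session variability), but the maximum-likelihood decision rule above is the core of the classic GMM approach.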