PhD position: Automatic speech recognition for non-native speakers in a noisy environment

Ph.D. position
Starting date: September-October 2023
Duration: 36 months 
Supervisors: Irina Illina, Associate Professor, HDR, University of Lorraine, LORIA-INRIA, Multispeech Team, illina@loria.fr, https://members.loria.fr/IIllina/
Emmanuel Vincent, Senior Research Scientist, INRIA, Multispeech Team, emmanuel.vincent@inria.fr, http://members.loria.fr/evincent/
 
Context
When a person has their hands busy performing a task such as driving a car or piloting an airplane, voice is a fast and efficient interaction modality. In recent years, end-to-end deep learning based automatic speech recognition (ASR), which directly optimizes the probability of the output character sequence given the input speech signal, has made great progress [Chan et al., 2016; Baevski et al., 2020; Gulati et al., 2020]. In aeronautical communications, English is generally mandatory. Unfortunately, many pilots are not native English speakers and speak with an accent influenced by the pronunciation mechanisms of their native language. Inside an aircraft cockpit, the non-native speech of the pilots and the surrounding noise are the most difficult challenges to overcome in order to achieve efficient ASR. Non-native speech presents several challenges [Shi et al., 2021]: incorrect or approximate pronunciations, errors in gender and number agreement, use of non-existent words, missing articles, grammatically incorrect sentences, etc. The acoustic environment adds a disturbing component to the speech signal. Much of the success of speech recognition therefore relies on the ability of the ASR models to take different accents and ambient noise into account.
 
Objectives
The recruited PhD candidate will develop methodologies and tools to achieve high-performance non-native automatic speech recognition in the aeronautical context, and more specifically in a noisy aircraft cockpit.
The project will build on an end-to-end automatic speech recognition system [Shi et al., 2021] based on wav2vec 2.0 [Baevski et al., 2020], one of the best-performing models in the current state of the art. wav2vec 2.0 learns speech representations from raw audio in a self-supervised manner, i.e., without requiring transcriptions.
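As an illustration of the kind of pretrained model involved, the minimal sketch below transcribes a short recording with a wav2vec 2.0 model fine-tuned for English ASR. It assumes the Hugging Face transformers and torchaudio libraries, the publicly released facebook/wav2vec2-base-960h checkpoint, and a placeholder file name example.wav; none of these choices are prescribed by the project.

import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Illustrative public checkpoint fine-tuned on English read speech;
# the project's own models and data may differ.
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# "example.wav" is a placeholder; wav2vec 2.0 expects 16 kHz mono audio.
waveform, sample_rate = torchaudio.load("example.wav")
if sample_rate != 16000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16000)

inputs = processor(waveform.squeeze().numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits  # frame-level character logits

# Greedy CTC decoding: most likely character per frame, then collapse repeats/blanks.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])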

How to apply: Interested candidates should send the required documents (CV, transcripts, motivation letter, and recommendation letters) to Irina Illina (illina@loria.fr).
Applications will be screened subject to the requirements of the French Directorate General of Armament (DGA).

Requirements & skills:
– MSc/MEng degree in speech/audio processing, computer vision, machine learning, or in a related field,
– ability to work independently as well as in a team,
– solid programming skills (Python, PyTorch), and deep learning knowledge,
– good level of written and spoken English.
 
References
[Baevski et al., 2020] A. Baevski, H. Zhou, A. Mohamed, and M. Auli. wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in Neural Information Processing Systems (NeurIPS), 2020.
[Chan et al., 2016] W. Chan, N. Jaitly, Q. Le, and O. Vinyals. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4960–4964, 2016.
[Chorowski et al., 2017] J. Chorowski and N. Jaitly. Towards better decoding and language model integration in sequence to sequence models. Interspeech, 2017.
[Houlsby et al., 2019] N. Houlsby, A. Giurgiu, S. Jastrzebski, B. Morrone, Q. De Laroussilhe, A. Gesmundo, M. Attariyan, and S. Gelly. Parameter-efficient transfer learning for NLP. International Conference on Machine Learning (ICML), PMLR, pp. 2790–2799, 2019.
[Gulati et al., 2020] A. Gulati, J. Qin, C.-C. Chiu, N. Parmar, Y. Zhang, J. Yu, W. Han, S. Wang, Z. Zhang, Y. Wu, and R. Pang. Conformer: Convolution-augmented transformer for speech recognition. Interspeech, 2020.
[Shi et al., 2021] X. Shi, F. Yu, Y. Lu, Y. Liang, Q. Feng, D. Wang, Y. Qian, and L. Xie. The Accented English Speech Recognition Challenge 2020: Open datasets, tracks, baselines, results and methods. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6918–6922, 2021.