
Projects

Partnerships and Cooperations

National Initiatives

FUI voiceHome: Voice control for smart home and multimedia appliances

Duration: February 2015 – January 2018
Coordinator: onMobile SA – Other partners: Delta Dore SA, Technicolor Connected Home Rennes SNC, Orange SA, eSoftThings SAS, Inria, IRISA, LOUSTIC
The goal of the project is to design and implement a natural-language voice interface for smart home and multimedia (set-top box) appliances. Inria is responsible for the robust recognition of spoken commands.

ANR DYCI2: Creative dynamics of improvised interaction

Duration: October 2014 – September 2018
Coordinator: Gérard Assayag (Ircam) – Other partners: Inria, University of Western Brittany
The project involves the creation, adaptation, and implementation of effective and efficient models of artificial listening, machine learning, interaction, and on-line creation of musical content, in order to build digital musical agents. These autonomous, creative agents will be able to integrate in an artistically credible way into diverse human settings such as live collective performance, (post-)production, and pedagogy.
Project website >>

ANR-DFG IFCASL: Individualized feedback in computer-assisted spoken language learning

Duration: March 2013 – February 2016
Coordinator: Jürgen Trouvain (Saarland University) – Other partners: Saarland University (COLI department)
The main objective of IFCASL is to investigate learning of oral French by German speakers, and oral German by French speakers at the phonetic level. The work has mainly focused on the design of a corpus of French sentences read by German speakers learning French, a corpus of German sentences read by French speakers, and tools for annotating French and German corpora.
Project website >>

ANR ContNomina: Exploitation of context for proper names recognition in diachronic audio documents

Duration: February 2013 – July 2016
Coordinator: Irina Illina (LORIA) – Other partners: LIA, Synalp
The ContNomina project addresses the problem of proper name recognition in automatic audio processing systems by exploiting the context of the processed documents as efficiently as possible.

ANR ORFEO: Outils et Ressources pour le Français Écrit et Oral (Tools and Resources for Written and Spoken French)

Duration: February 2013 – February 2016
Coordinator: Jeanne-Marie Debaisieux (Université Paris 3) – Other partners: ATILF, CLLE-ERSS, ICAR, LIF, LORIA, LATTICE, MoDyCo
The main objective of the ORFEO project is the constitution of a corpus for the study of contemporary French.
Project website >>

ADT VisArtico: Inria Technological Development Action VisArtico

Duration: 2013 – 2016
The Inria Technological Development Action (ADT) VisArtico aims to develop and improve VisArtico, an articulatory visualization software tool. In addition to improving its basic functionality, several articulatory analysis and processing tools are being integrated. We will also work on the integration of multimodal data.

Equipex ORTOLANG: Open Resources and Tools for Language

Duration: September 2012 – May 2016
Coordinator: Jean-Marie Pierrel (ATILF) – Other partners: LPL, LORIA, Modyco, LLL, INIST
Project website >>

FUI RAPSODIE: Automatic Speech Recognition for Hard of Hearing or Handicapped People

Duration: March 2012 – February 2016
Coordinator: eRocca SAS – Other partners: CEA Grenoble, Inria, Castorama SA
The goal of the project is to develop a portable device that will help hard-of-hearing people communicate with others.
Project website >>

ADT FASST: Flexible Audio Source Separation Toolbox

Duration: December 2012 – November 2014
The Inria Technological Development Action FASST (2012–2014) was conducted by MULTISPEECH in collaboration with the PANAMA and TEXMEX teams of Inria Rennes. It reimplemented in efficient C++ the Flexible Audio Source Separation Toolbox (FASST), originally developed in Matlab by the METISS team of Inria Rennes. This enabled the application of FASST to larger data sets and its use by a wider audience. The new C++ version was released in January 2014. Two modules were also developed for HTK and Kaldi in order to perform noise-robust speech recognition by uncertainty decoding.
Software website >>
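For readers unfamiliar with the technique, the core operation behind toolboxes such as FASST is soft time-frequency masking (Wiener filtering). The following is a minimal, hypothetical sketch of that operation only, not FASST's actual model or API; the function names and the way source power estimates are obtained are invented for illustration:

```python
import numpy as np

def wiener_masks(source_powers):
    """Soft time-frequency masks from per-source power spectrograms.

    source_powers: array (n_sources, n_freq, n_frames) of non-negative
    power estimates (in FASST these come from a structured model)."""
    total = source_powers.sum(axis=0) + 1e-12  # avoid division by zero
    return source_powers / total               # masks sum to ~1 per bin

def separate(mixture_stft, source_powers):
    """Apply Wiener masks to a complex mixture STFT -> per-source STFTs."""
    # Broadcasting: (n_sources, n_freq, n_frames) * (n_freq, n_frames)
    return wiener_masks(source_powers) * mixture_stft

# Toy example: two sources, 4 frequency bins, 3 frames
rng = np.random.default_rng(0)
powers = rng.random((2, 4, 3))
mix = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
estimates = separate(mix, powers)
# Because the masks sum to 1, the source estimates sum back to the mixture
assert np.allclose(estimates.sum(axis=0), mix)
```

The conservative property checked by the final assertion (estimates summing back to the mixture) is what makes Wiener-style masking attractive for downstream uncertainty estimation: the residual per source is explicit.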

International Initiatives

i3DMusic: Real-time Interactive 3D Rendering of Musical Recordings

Duration: October 2010 – March 2014
Coordinator: Audionamix (FR) – Other partners: Inria, EPFL (CH), Sonic Emotion (CH)
The i3DMusic project aims to enable real-time interactive respatialization of mono or stereo music content, achieved by combining source separation and 3D audio rendering techniques. Inria is responsible for the source separation work package, more precisely for designing scalable on-line source separation algorithms and for estimating advanced spatial parameters from the available mixture.
Project website >>
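To illustrate the respatialization idea (this is not the project's actual algorithm), a minimal sketch: once a separation step has extracted individual source signals, each one can be re-rendered at a new stereo position, here with simple constant-power panning. All names below are invented for illustration:

```python
import numpy as np

def pan(source, angle):
    """Constant-power stereo panning of a mono signal.

    angle in [-1, 1]: -1 = hard left, 0 = center, 1 = hard right."""
    theta = (angle + 1.0) * np.pi / 4.0   # map to [0, pi/2]
    left = np.cos(theta) * source         # cos^2 + sin^2 = 1 preserves power
    right = np.sin(theta) * source
    return np.stack([left, right], axis=0)

def respatialize(sources, angles):
    """Mix separated mono sources at new positions into one stereo signal.

    sources: array (n_sources, n_samples); angles: one pan value per source."""
    mix = np.zeros((2, sources.shape[1]))
    for src, a in zip(sources, angles):
        mix += pan(src, a)
    return mix

# Two toy mono sources, re-placed hard left and center
sources = np.vstack([np.sin(np.linspace(0, 1, 100)),
                     np.cos(np.linspace(0, 1, 100))])
stereo = respatialize(sources, [-1.0, 0.0])
```

Constant-power panning is chosen here because the squared channel gains sum to one, so moving a source does not change its perceived loudness; the interactive part of the project amounts to updating the pan (or richer 3D rendering) parameters while audio streams through such a chain.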

Bilateral Contracts with Industry

Contract with Studio MAIA

Duration: September 2014 – August 2015
Supported by Bpifrance
A pre-study contract was signed to investigate speech processing tools that could eventually be transferred as plugins for audio mixing software. Prosody modification, noise reduction, and voice conversion are of special interest.

Contract with Venatech SAS

Duration: June 2014 – August 2017
Supported by Bpifrance
The project aims to design a real-time control system for wind farms that maximizes energy production while limiting acoustic nuisance. It will leverage our expertise in audio source separation and in uncertainty modeling and propagation.