Game of Drones: localizing a drone with audio (teaser video)

World-leading CEDAR Audio ships improved audio desaturation based on PANAMA’s Sparse Audio Declipper technology

The new audio desaturation module from CEDAR Audio, a world-leading provider of professional audio restoration solutions, is based on further developments of PANAMA’s Sparse Audio Declipper.

Click here to learn more about the Sparse Audio Declipper: read the papers where it is described and tested; download the corresponding Matlab code; listen to audio examples; test it on your own files from your browser.

Release: A-SPADE, the Sparse Audio Declipper

Ever faced saturated audio recordings? Try A-SPADE, the Sparse Audio Declipper.

Clipping, also known as saturation, is a common phenomenon that can seriously distort audio recordings. Declipping is the inverse process: restoring saturated audio recordings to improve their quality. A-SPADE is a declipping algorithm developed by PANAMA. It is based on the expression of declipping as a linear inverse problem and on analysis sparse (a.k.a. cosparse) regularization in the time-frequency domain.
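To make the inverse-problem viewpoint concrete, here is a minimal Python sketch (not A-SPADE's actual code; all names are illustrative) of the hard-clipping measurement model: reliable samples are observed exactly, while clipped samples only constrain the magnitude of the original signal.

```python
import numpy as np

def hard_clip(x, threshold):
    """Hard clipping (saturation): samples beyond +/-threshold are flattened."""
    return np.clip(x, -threshold, threshold)

# Toy signal: a sinusoid, clipped at 70% of its peak amplitude.
t = np.linspace(0, 1, 256, endpoint=False)
x = np.sin(2 * np.pi * 5 * t)
theta = 0.7
y = hard_clip(x, theta)

# Declipping treats y as partial measurements of the unknown x:
# - "reliable" samples (|y| < theta) are observed exactly;
# - clipped samples only tell us |x| >= theta, with a known sign.
reliable = np.abs(y) < theta
print(f"{reliable.mean():.0%} of samples are reliable")

# Any admissible reconstruction must satisfy these constraints; among
# them, A-SPADE favors signals whose time-frequency analysis
# coefficients are sparse (the cosparse/analysis regularization).
assert np.allclose(y[reliable], x[reliable])
assert np.all(np.abs(x[~reliable]) >= theta)
```

The search over admissible signals, and the alternating optimization A-SPADE uses to perform it, are described in the papers and Matlab code linked below.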

You can listen to sound examples or try it on your own files in your web browser using A-SPADE’s online implementation on A||GO, Inria’s web platform for technology demonstration.

Code release: PHYSALIS (Physics-Driven Cosparse Analysis) and CBC4CS (Convex Blind Calibration for Compressed Sensing)

The PHYSALIS Matlab code allows you to reproduce the experiments from a series of papers from our group on cosparse (i.e., the analysis flavor of sparse) source localization, for both acoustic and EEG inverse problems. It is available here.

The CBC4CS code allows you to reproduce convex blind calibration experiments in the context of compressive sensing. It is available here.

2017 EURASIP Best Paper Award

Our joint paper published in the EURASIP Journal on Advances in Signal Processing has been awarded a 2017 EURASIP Best Paper Award:

Universal and efficient compressed sensing by spread spectrum and application to realistic Fourier imaging techniques
Gilles Puy, Pierre Vandergheynst, Rémi Gribonval and Yves Wiaux

The award ceremony will take place at EUSIPCO 2017 in Greece. For more information, see the EURASIP newsletter or the web site of the EURASIP Journal on Advances in Signal Processing.

Preprint + code on Compressive K-Means

We have a new preprint on Compressive K-Means.

Read it on https://hal.inria.fr/hal-01386077

Download the code at http://sketchml.gforge.inria.fr/

(closed) PhD offer – Interactive Navigation for a Video Audio True Experience @ PANAMA, Inria Rennes

Details and application here

Paper + code on random sampling of bandlimited signals on graphs

Our paper on Random Sampling of Bandlimited Signals on Graphs has been accepted for publication in Applied and Computational Harmonic Analysis.

Read it on arXiv:1511.05118 or hal:hal-01320214

Download the code at http://grsamplingbox.gforge.inria.fr/

2016 Award for Outstanding Contributions in Neural Systems

Congratulations to Antoine Deleforge (new PANAMA team member), Florence Forbes (MISTIS team) and Radu Horaud (PERCEPTION team) who received the 2016 Hojjat Adeli Award for Outstanding Contributions in Neural Systems for their paper:

  • A. Deleforge, F. Forbes, and R. Horaud (2015), “Acoustic Space Learning for Sound-source Separation and Localization on Binaural Manifolds,” International Journal of Neural Systems, 25:1, 1440003 (21 pages)

The Award for Outstanding Contributions in Neural Systems, established by World Scientific Publishing Co. in 2010, is awarded annually to the most innovative paper published in the previous volume/year of the International Journal of Neural Systems.

For more information concerning this paper, please visit the PERCEPTION team's page on Acoustic Space Learning on Binaural Manifolds (article download, Matlab code, datasets, etc.).

(closed) PhD offer – Estimating the Geometry of Audio Scenes Using Virtually-Supervised Learning @ Inria Rennes

Details and application here