PhySound

EU H2020 Marie Sklodowska-Curie Individual Global Fellowship

Contact: gabriel.cirio [at] gmail [dot] com

 


About

Participating institutions: Inria and Columbia University
PI: Gabriel Cirio
Co-PIs: George Drettakis, Changxi Zheng, Eitan Grinspun
Sound is as important as visuals in modern media such as movies and video games. Yet relatively little effort has been devoted to rendering sound from digital environments, compared to the phenomenal advances in visual rendering. While sophisticated light transport algorithms allow the photorealistic rendering of a 3D scene, sound must be added through ad-hoc editing of recorded sounds and their manual synchronization with the visuals, yielding limited and repetitive results. The PhySound project addresses this problem by generating sounds from virtual environments through physically based simulation, greatly simplifying the creation of sound content, allowing perfect synchronization with the visuals, and avoiding recordings that can be slow, expensive, or simply impossible to capture.

The project focuses on a challenging family of objects: thin shells. Thin shells are notoriously difficult to simulate due to their complex vibrations, which often lead to computationally expensive chaotic regimes. Familiar thin-shell sounds include crumpling paper and soda cans, striking plastic bottles and metal slabs, or playing instruments such as cymbals and gongs. The project aims at digitally reproducing the main sources of thin-shell sound: frictional contact, buckling/crumpling, and turbulence, each with a very distinct and characteristic sound. The key challenge is computation time, with traditional techniques yielding accurate but prohibitively slow algorithms. We aim at making simulations tractable first, and real time afterwards.

This project considerably widens the range of real-life object sounds that can be digitally generated and contributes to the young research field of physically based sound rendering, which has the potential to become the next key technology of the media industry and revolutionize the way we create content, just as graphics rendering did in the past. The project provides automatic sound content creation algorithms for better media production and faster content creation cycles by avoiding recordings and manual synchronization with visuals. In addition, it is expected to provide insight into the physical mechanisms that produce sound, which can be of interest to fields beyond Computer Graphics.
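As a rough illustration of what physically based sound synthesis involves in general (and not the specific thin-shell or wave-turbulence algorithms developed in PhySound), the short Python sketch below synthesizes a sound as a sum of exponentially damped sinusoids, the classic modal synthesis model often used as a starting point in this field. All mode frequencies, dampings, and amplitudes are hypothetical placeholder values.

import numpy as np

SAMPLE_RATE = 44100  # audio sampling rate in Hz

def modal_sound(frequencies, dampings, amplitudes, duration=1.0):
    """Sum of exponentially damped sinusoids, one per vibration mode.

    frequencies -- modal frequencies in Hz (hypothetical values)
    dampings    -- damping coefficients in 1/s
    amplitudes  -- excitation amplitudes per mode
    """
    t = np.arange(int(duration * SAMPLE_RATE)) / SAMPLE_RATE
    signal = np.zeros_like(t)
    for f, d, a in zip(frequencies, dampings, amplitudes):
        # Each mode rings at its own frequency and decays at its own rate.
        signal += a * np.exp(-d * t) * np.sin(2.0 * np.pi * f * t)
    # Normalize to avoid clipping if written to an audio file.
    return signal / np.max(np.abs(signal))

# Example: three hypothetical modes of a struck metal object.
audio = modal_sound(frequencies=[220.0, 585.0, 1340.0],
                    dampings=[3.0, 6.0, 12.0],
                    amplitudes=[1.0, 0.6, 0.3])

In real physically based sound rendering, the mode parameters come from simulating the object's geometry and material rather than being hand-picked, and nonlinear phenomena such as buckling and turbulence, the focus of this project, require going well beyond this linear model.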


Publications

 

Multi-Scale Simulation of Nonlinear Thin-Shell Sound with Wave Turbulence

Gabriel Cirio, Ante Qu, George Drettakis, Eitan Grinspun, Changxi Zheng

ACM Transactions on Graphics (SIGGRAPH Conference Proceedings), 37(4), 2018

[Project Webpage]


Code

To be released soon…
