


Expressive restoration of Gallo-Roman statues by virtual sculpture and animation

The e-Roma project aims to revisit the digitization and virtual restoration of archaeological and fine-arts artefacts by taking advantage of the sites from which they were retrieved and the eras they belong to. To do so, e-Roma will develop a new virtual representation versatile and unified enough to be used for both restoration and animation of digitized artworks. Traditional cardboard models, with their fixed and rigid representation, will therefore be replaced by interactive dynamic virtual prototypes that help restore statues and illustrate changes over time.

A first research axis of e-Roma is to revisit the pipeline of 3D scanning and modelling of artworks, focusing on accuracy, plausibility and interactivity, which opens real opportunities for restoration and virtual restitution. Breaking with traditional approaches that proceed from a reconstructed model and discard the measured data, we propose to exploit the initial measurements throughout, from the initial reconstruction to the restoration and virtual sculpture steps. The restoration process does not rely on a single statue alone, however, but also on other existing archaeological remains and historical knowledge. That is why we propose to take advantage of existing similar artworks, and to bring in additional knowledge about human anatomy, the behaviour of draped fabric, the history of sculptural ideals, and the gestures and carving tools of the time. This activity of virtual restoration and restitution should ultimately be performed by restorers, consistently with what they would do in a non-virtual restoration environment. Finally, the concept of traceability will be crucial to distinguish between the actual remains and the restoration or creation process.

Centuries of degradation have damaged or even destroyed many statues: of the huge variety of sculptures that once existed, only a few have come down through the centuries. To bring back this variety, our second research axis aims at generating new statues with new poses, new faces, and even different occupations or social positions, so as to complement and give meaning to the various architectural elements found in excavations. Once again, the creation process will remain within the framework of plausible assumptions. Finally, e-Roma addresses the problem of generating animated sequences of statues, which is particularly interesting for the promotion and presentation of restitutions to the general public.

e-Roma brings together researchers from the GeoMod team (LIRIS) in Lyon and IMAGINE (INRIA-LJK) in Grenoble on the theme of cultural heritage, in a project at the crossroads of their respective expertise in computer graphics, while promoting the archaeological wealth of the Rhône-Alpes region. The project is undertaken in partnership with the Gallo-Roman Museum of Lyon-Fourvière and historians from Paris-Sorbonne. The restoration and restitution of statues is particularly important for the Lyon-Fourvière museum, since it holds a large collection of fragmentary stone reliefs reflecting the high degree of romanisation that characterized Lugdunum. The museum now wants to take advantage of the digital revolution to help reconstruct a number of them, and to imagine the ornamentation of its architectural elements with statues that have now disappeared. The historians from Paris-Sorbonne will share with the entire team the considerable progress made over the past twenty-five years in the knowledge of the museum's artworks, and more generally of the statuary of imperial times. e-Roma will also involve a professional art restorer specialized in statues, who will guide the model development by ensuring that it remains coherent with his restoration expertise.

Our partners in this project are:

LIRIS – CNRS Laboratoire d’InfoRmatique en Image et Systèmes d’information
Musée Gallo-Romain Musée Gallo-Romain de Lyon-Fourvière
Paris IV Université Paris-Sorbonne

The project started in February 2017 for a duration of 48 months, with a 410 874 € grant from ANR.



Read more about Anatomy2020 on the official web site.

Anatomy2020 aims to develop an innovative educational tool for learning functional anatomy. The platform will integrate the latest work in modeling, computer graphics and human-machine interaction with advances in cognitive science and education, to test learning scenarios for anatomy. The approach is based on the idea that body movements can facilitate learning by enriching memory traces. This “embodiment” seems particularly relevant for learning functional anatomy, since the knowledge to be acquired can be related to the learner's own bodily experiences.

More specifically, we are working on anatomy content creation. We study current teaching methods and aim to improve the learning pipeline. Anatomy is a complex, dynamic and highly structured subject to learn. We employ cinematography principles to present anatomy for improved understanding and better long-term retention. We are building an automated creation and editing tool for teachers to generate tailor-made content for their anatomy courses. The tool will also include a module for students to practice lessons and track their progress.

Our partners in this project are:

GIPSA-lab Laboratoire Grenoble Images Parole Signal Automatique
LIBM Laboratoire interuniversitaire de biologie de la motricité
LIG Laboratoire d’Informatique de Grenoble
TIMC Laboratoire des Techniques de l’Ingénierie Médicale et de la Complexité – Informatique, Mathématiques, Applications – Grenoble

The project started in January 2017 for a duration of 42 months, with a 679 991 € grant from ANR.



PERFORMANCE LAB is a multi-disciplinary project funded by IDEX Univ. Grenoble-Alpes to promote interactions between researchers in performing arts, social sciences, and computer science under the banner of “performance as research”. Topics of interest include the digital mediation and documentation of live performances, as well as emerging practices in using the performing arts in other disciplines.

The lab is organized in three work items:

Performance as research, led by Gabriele Sofia (Litt&Arts), Adriana Diaconu (Pacte) and Anne-Laure Amilhat Szary (Pacte). This item aims to promote the concepts and practices of performance as research among researchers in the social sciences and computer science.

Digital dramaturgies led by Julie Valéro (Litt&Arts) and Rémi Ronfard (INRIA/LJK). This item of work examines how computers can be used to teach, document and research the performing arts using advanced video processing and artificial intelligence, to augment the performing arts using computer graphics and virtual reality, and to write for the performing arts using natural user-interactions and programming languages.

Cartographies of movement, led by Gretchen Schiller (Litt&Arts) and Lionel Reveret (INRIA/LJK). This item examines different representations and mapping techniques for studying movement at different scales and in different disciplines, including geography, choreography and urban modeling.

Our partners in this project are: Litt&Arts laboratory at Univ. Grenoble Alpes; Pacte laboratory at Univ. Grenoble Alpes; ESAD – Ecole Supérieure d’Art et de Design in Grenoble and Valence; Le MAGASIN – Centre National d’Arts et de Cultures in Grenoble; CCN2 – Centre Chorégraphique National in Grenoble; Le Pacifique – Centre de Développement Chorégraphique National in Grenoble; and L’Hexagone – Scène Nationale Arts Sciences in Meylan.

Kino Ai


More info on the Kino Ai project web page.

KINOAI is a two-year project funded by InriaHub, with the goal of providing a web interface to ultra-high-definition video recordings of live performances. It is a follow-up to the 2017 project ULTRAHD.

ULTRAHD focused on building a back-end supporting heavy-duty video processing for tracking and naming actors, and on creating cinematographic rushes with different shot sizes (close shot, medium shot, long shot) and shot compositions (one-shots, two-shots, three-shots) by simulating a virtual pan-tilt-zoom camera driven by artificial intelligence.
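The virtual pan-tilt-zoom idea can be sketched as follows: given per-frame bounding boxes of the tracked actors, compute a crop window of the requested shot size around them, then smooth the window over time to mimic camera inertia. This is a minimal illustration under stated assumptions; all names and padding values are hypothetical and do not reflect the actual ULTRAHD implementation.

```python
# Illustrative sketch of a virtual pan-tilt-zoom camera (hypothetical names
# and values; not the actual ULTRAHD code). Actor positions come from a
# tracker as bounding boxes in the UHD frame.

from dataclasses import dataclass

@dataclass
class Box:
    x: float; y: float; w: float; h: float

# Assumed padding factors per shot size (illustrative values only).
SHOT_PADDING = {"close": 1.2, "medium": 2.0, "long": 3.5}

def union(boxes):
    """Smallest box enclosing all actor boxes."""
    x0 = min(b.x for b in boxes); y0 = min(b.y for b in boxes)
    x1 = max(b.x + b.w for b in boxes); y1 = max(b.y + b.h for b in boxes)
    return Box(x0, y0, x1 - x0, y1 - y0)

def crop_window(actors, shot, aspect=16 / 9):
    """Crop framing the given actors, padded for the shot size,
    with a fixed output aspect ratio."""
    u = union(actors)
    pad = SHOT_PADDING[shot]
    h = u.h * pad
    w = max(h * aspect, u.w * pad)
    h = w / aspect                      # keep the aspect ratio exact
    cx, cy = u.x + u.w / 2, u.y + u.h / 2
    return Box(cx - w / 2, cy - h / 2, w, h)

def smooth(prev, cur, alpha=0.1):
    """Exponential smoothing of successive crops, simulating the
    inertia of a physical pan-tilt-zoom camera."""
    if prev is None:
        return cur
    lerp = lambda a, b: a + alpha * (b - a)
    return Box(lerp(prev.x, cur.x), lerp(prev.y, cur.y),
               lerp(prev.w, cur.w), lerp(prev.h, cur.h))
```

Extracting the smoothed window from each UHD frame then yields one cinematographic rush per shot specification.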

KINOAI will build a front-end to the ULTRAHD back-end, providing an intuitive web interface for choosing shots and editing them together. The interface is intended for artists, researchers and professors in the performing arts.

Isadora Living Archive project


See the project page here: Isadora Living Archive project

The Isadora Living Archive project is a collaboration between Inria’s teams ANIMA and Ex-Situ, the dancer Elisabeth Schwartz and the MocapLab, with the support of the Centre National de la Danse.

The goal of this project is to create an interactive archive of Isadora Duncan’s choreography with the help of new technologies.



Kino Ai is a joint research project of the IMAGINE team at Inria Grenoble Alpes, and the Performance Lab at Univ. Grenoble Alpes. Following our previous work in “multiclip video editing” and “Split Screen Video Generation”, we are working to provide a user-friendly environment for editing and watching ultra-high definition movies online, with an emphasis on recordings of live performances.

Kino Ai makes it possible to generate movies procedurally from existing footage of live performances.

We use artificial intelligence to track and recognize actors in the video, compute aesthetically pleasing virtual camera movements, generate a variety of interesting cinematographic rushes from a single video source, and edit them together into movies.
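As a simple illustration of how several rushes can be combined on screen, a focus+context split-screen layout can be described as a wide context panel showing the full stage plus one focus column per actor. The function below is a hypothetical sketch, not the project's actual code, and its panel proportions are assumptions.

```python
# Illustrative focus+context split-screen layout (hypothetical names and
# proportions; not the actual Kino Ai implementation). A wide "context"
# strip shows the full stage at the top of the canvas, and one "focus"
# column per tracked actor fills the remaining area.

def focus_context_layout(n_actors, canvas_w=1920, canvas_h=1080,
                         context_ratio=0.3):
    """Return named (x, y, w, h) placements on the output canvas."""
    ctx_h = round(canvas_h * context_ratio)
    panels = {"context": (0, 0, canvas_w, ctx_h)}
    col_w = canvas_w // n_actors
    for i in range(n_actors):
        panels[f"actor_{i}"] = (i * col_w, ctx_h, col_w, canvas_h - ctx_h)
    return panels
```

Each named panel would then be filled with the corresponding cropped rush when compositing the final frame.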

Our methods have been extensively tested during the rehearsals of “La fabrique des monstres”, a stage adaptation of Mary Shelley’s Frankenstein which was created by director Jean-Francois Peyret at Theatre Vidy in Lausanne in January 2018.

Three short documentaries using cinematographic rushes created by Kino Ai were published online in October 2018.

The next steps in this long-term project will be (1) to make those techniques available online, using a dedicated movie browser that lets a remote user compose complex, dynamic split-screen or multi-screen compositions along a traditional timeline interface; (2) to develop new algorithms for automatically editing the cinematographic rushes into movies, based on a semantic analysis of the play-script, the video recording and the audio track, taking user preferences into account; and (3) to produce more examples of procedurally generated movies useful for teaching and researching the creative process at work in theatre and dance performances and rehearsals.


Vineet Gandhi, Rémi Ronfard, Michael Gleicher. Multi-Clip Video Editing from a Single Viewpoint. CVMP 2014 – European Conference on Visual Media Production, London, United Kingdom, Nov. 2014.

Rémi Ronfard, Benoit Encelle, Nicolas Sauret, Pierre-Antoine Champin, Thomas Steiner, et al.. Capturing and Indexing Rehearsals: The Design and Usage of a Digital Archive of Performing Arts. Digital Heritage, Granada, Spain, Sept. 2015.

Moneish Kumar, Vineet Gandhi, Rémi Ronfard, Michael Gleicher. Zooming On All Actors: Automatic Focus+Context Split Screen Video Generation. Computer Graphics Forum, Wiley, May 2017.

Joseph David. Répétitions en cours. La fabrique des Monstres. Episode 1, Dec. 2017 (In French).

Aude Fourel. Répétitions en cours. La fabrique des monstres. Episode 2, Feb. 2018 (in French).

Maella Mickaelle. Répétitions en cours. La fabrique des monstres. Episode 3, June 2018 (in French).