Projects and Funding

E-ROMA

2017-2020

Expressive restoration of Gallo-Roman statues by virtual sculpture and animation

The e-ROMA project aims to revisit the digitization and virtual restoration of archaeological and fine-arts artefacts by taking advantage of the sites from which they were retrieved and the eras they belong to. To do so, e-ROMA will develop a new virtual representation versatile and unified enough to be used for both the restoration and the animation of digitized artworks. Traditional cardboard models, with their fixed and rigid representation, will therefore be replaced by interactive dynamic virtual prototypes that help restore statues and illustrate changes over time.

A first research axis of e-ROMA is to revisit the pipeline for 3D scanning and modelling of artworks, focusing on accuracy, plausibility and interactivity, which opens real opportunities for restoration and virtual restitution. Breaking with traditional approaches that proceed from a reconstructed model and discard the measured data, we propose to keep exploiting the initial measurements throughout the reconstruction, restoration and virtual sculpture steps. The restoration process, however, does not rely on a single statue alone, but also on other existing archaeological remains and historical knowledge. We therefore propose to take advantage of similar existing artworks, and to bring in additional knowledge about human anatomy, the behaviour of draped fabric, the history of sculptural ideals, and the gestures and carving tools of the time. This activity of virtual restoration and restitution should ultimately be performed by restorers, consistently with what they would do in a non-virtual restoration environment. Finally, the concept of traceability will be crucial to distinguish the actual remains from the products of the restoration or creation process.

Centuries of degradation have damaged or even destroyed many statues: of the huge variety of sculptures once produced, only a few have come down to us. To bring back this variety, our second research axis aims at generating new statues with new poses, new faces, and even different occupations or social positions, so as to complement and give meaning to the various architectural elements found in excavations. Once again, the creation process will remain within the framework of plausible assumptions. Finally, e-ROMA addresses the problem of generating animated sequences of statues, which is particularly interesting for promoting and presenting restitutions to the general public.

e-ROMA brings together researchers from the GeoMod team (LIRIS) in Lyon and IMAGINE (INRIA-LJK) in Grenoble around the theme of cultural heritage, in a project at the crossroads of their respective expertise in computer graphics, while promoting the archaeological wealth of the Rhône-Alpes region. The project is undertaken in partnership with the Gallo-Roman Museum of Lyon-Fourvière and historians from Paris-Sorbonne. The restoration and restitution of statues is particularly important for the Lyon-Fourvière museum, since it holds a large collection of fragmentary stone reliefs reflecting the high degree of romanisation that characterized Lugdunum. The museum now wants to take advantage of the digital revolution to help reconstruct a number of them, and to imagine the ornamentation of its architectural elements with statues that have since disappeared. The Paris-Sorbonne historians will share with the entire team the considerable progress made over the past twenty-five years in the knowledge of the museum's artworks, and more generally of the statuary of imperial times. e-ROMA will also involve a professional art restorer specialized in statues, who will guide the model development by ensuring that it remains coherent with restoration expertise.

Our partners in this project are:

  • LIRIS – CNRS Laboratoire d’InfoRmatique en Image et Systèmes d’information
  • Musée Gallo-Romain de Lyon-Fourvière
  • Université Paris-Sorbonne (Paris IV)


The project started in February 2017 for a duration of 48 months with a 410 874 € grant from ANR.

ANATOMY2020

2017-2020

Read more about Anatomy2020 on the official web site.

Anatomy2020 aims to develop an innovative educational tool for learning functional anatomy. The platform will integrate the latest work in modeling, graphics and human-machine interaction with advances in cognitive science and education to test learning scenarios for anatomy. The approach is based on the idea that body movements can facilitate learning by enriching memory traces. This “embodiment” seems particularly relevant to the learning of functional anatomy, since the knowledge to be acquired can be related to the learner’s own bodily experiences.

More specifically, we are working on anatomy content creation. We study current pedagogical methods and aim to improve the learning pipeline. Anatomy is a complex, dynamic and highly structured subject to learn. We employ cinematography principles to present anatomy for improved understanding and better long-term retention. We are building an automated creation and editing tool that lets teachers generate tailor-made content for their anatomy courses. The tool will also include a module for students to practice lessons and track their progress.

Our partners in this project are:

  • Anatoscope
  • GIPSA-lab Laboratoire Grenoble Images Parole Signal Automatique
  • LIBM Laboratoire interuniversitaire de biologie de la motricité
  • LIG Laboratoire d’Informatique de Grenoble
  • TIMC Laboratoire des Techniques de l’Ingénierie Médicale et de la Complexité – Informatique, Mathématiques, Applications – Grenoble

The project started in January 2017 for a duration of 42 months with a 679 991 € grant from ANR.

COLLODI 2

2016-2018

This new FUI project is a collaboration between TeamTO, Mercenaries Engineering and Inria, with the common goal of developing the next-generation professional 3D computer animation software RUMBA.

More specifically, the task of the IMAGINE team is to implement research results coming from Martin Guay’s thesis on sketch-based posing and animation. Our challenge is to generalize those research results to the case of complex professional rigged characters.

June 2018: Animation Film Festival in Annecy, group demonstration of recent results with TeamTO and Mercenaries Engineering at MIFA. See this link for more information.

April 2018: mid-project meeting, with a demonstration of sketch-based animation results to ANR in Paris.

September 2017: Tests with TeamTO animators in Valence.

July 2017: Demonstration of sketch-based posing results to our partners in Paris.

PERFORMANCE LAB

2017-2020

PERFORMANCE LAB is a multi-disciplinary project funded by the IDEX of Univ. Grenoble Alpes to promote interactions between researchers in performing arts, social sciences, and computer science under the banner of “performance as research”. Topics of interest include the digital mediation and documentation of live performances, as well as emerging practices in using the performing arts in other disciplines.

The lab is organized in three work items:

Performance as research, led by Gabriele Sofia (Litt&Arts), Adriana Diaconu (Pacte) and Anne-Laure Amilhat Szary (Pacte). This item aims to promote the concepts and practices of performance as research to researchers in social science and computer science.

Digital dramaturgies, led by Julie Valéro (Litt&Arts) and Rémi Ronfard (INRIA/LJK). This work item examines how computers can be used to teach, document and research the performing arts using advanced video processing and artificial intelligence; to augment the performing arts using computer graphics and virtual reality; and to write for the performing arts using natural user interactions and programming languages.

Cartographies of movement, led by Gretchen Schiller (Litt&Arts) and Lionel Reveret (INRIA/LJK). This item examines different representations and mapping techniques for studying movement at different scales in disciplines including geography, choreography and urban modeling.

Our partners in this project are:

  • Litt&Arts laboratory at Univ. Grenoble Alpes
  • Pacte laboratory at Univ. Grenoble Alpes
  • ESAD – Ecole Supérieure d’Art et de Design in Grenoble and Valence
  • Le MAGASIN – Centre National d’Arts et de Cultures in Grenoble
  • CCN2 – Centre Chorégraphique National in Grenoble
  • Le Pacifique – Centre de Développement Chorégraphique National in Grenoble
  • L’Hexagone – Scène Nationale Arts Sciences in Meylan

Kino Ai

More info on the Kino Ai project web page.

KINOAI is a two-year project funded by InriaHub with the goal of providing a web interface to ultra-high-definition video recordings of live performances. It is a follow-up to the 2017 project ULTRAHD.

ULTRAHD focused on building a back-end supporting heavy-duty video processing for tracking and naming actors, and for creating cinematographic rushes with different shot sizes (close shot, medium shot, long shot) and shot compositions (one-shots, two-shots, three-shots) by simulating the effect of a virtual pan-tilt-zoom camera driven by artificial intelligence.
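To give a concrete flavour of the virtual pan-tilt-zoom idea, here is a minimal sketch in Python under our own simplifying assumptions (the function name, shot ratios and bounding-box format are illustrative, not the project's actual code): a rush is produced by cropping the fixed ultra-high-definition frame around a tracked actor, with the shot size controlling the zoom.

    import numpy as np

    # Illustrative shot sizes: fraction of the crop height that the actor
    # bounding box should occupy (assumed values, not calibrated ones).
    SHOT_HEIGHT_RATIO = {"close": 0.35, "medium": 0.6, "long": 0.9}

    def virtual_ptz_crop(frame, actor_box, shot="medium", aspect=16 / 9):
        """Simulate a virtual pan-tilt-zoom camera by cropping a fixed
        UHD frame around an actor bounding box (x0, y0, x1, y1)."""
        H, W = frame.shape[:2]
        x0, y0, x1, y1 = actor_box
        cx, cy = (x0 + x1) / 2, (y0 + y1) / 2                  # pan/tilt target
        crop_h = min((y1 - y0) / SHOT_HEIGHT_RATIO[shot], H)   # zoom level
        crop_w = min(crop_h * aspect, W)
        crop_h = crop_w / aspect
        # Keep the crop window inside the frame.
        left = int(np.clip(cx - crop_w / 2, 0, W - crop_w))
        top = int(np.clip(cy - crop_h / 2, 0, H - crop_h))
        return frame[top:top + int(crop_h), left:left + int(crop_w)]

    # Usage: reframe a 4K frame as a medium shot on one tracked actor.
    frame = np.zeros((2160, 3840, 3), dtype=np.uint8)
    rush = virtual_ptz_crop(frame, actor_box=(1800, 600, 2100, 1500))

In this simplified picture, panning and tilting amount to moving the crop window as the actor tracks evolve over time.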

KINOAI will build a front-end to the ULTRAHD back-end, providing an intuitive web interface for choosing shots and editing them together. The proposed interface is intended to be used by artists, researchers and professors in the performing arts.

Past projects

ERC Grant Expressive

(2012-2017)

Despite our great expressive skills, we humans lack an easy way of communicating the 3D shapes we imagine, even more so when it comes to dynamic shapes. For centuries, humans have used drawing and sculpture to convey shapes, but these tools require significant expertise and time investment, especially when describing complex or dynamic shapes. With the advent of virtual environments, one would expect digital modelling to replace these traditional tools. Unfortunately, conventional techniques in the area have failed to do so: even trained computer artists still create with traditional media and only use the computer to reproduce already-designed content.

Could digital media be turned into a tool, even more expressive and simpler to use than a pen, to convey and refine both static and dynamic 3D shapes? This is the goal of this project. Achieving it will make shape design directly possible in virtual form, from early drafting to progressive refinement and finalization of an idea. To this end, models for shape and motion need to be totally rethought from a user-centred perspective. Specifically, we propose the new paradigm of responsive 3D shapes – a novel representation separating morphology from isometric embedding – to define high-level, dynamic 3D content that takes form, is refined, moves and deforms based on user intent, expressed through intuitive interaction gestures.

Scientifically, while the problem we address belongs to Computer Graphics, it calls for a new convergence with Algebraic Geometry, Simulation and Human Computer Interaction. In terms of impact, the resulting “expressive virtual pen” for 3D content will not only serve the needs of digital artists, but also of scientists and engineers willing to refine their thoughts by interacting with virtual prototypes of their objects of study, educators and media aiming at quickly conveying their ideas, as well as anyone willing to communicate a 3D shape. This project thus opens up new horizons for science, technology and society.

ULTRAHD

(2017)

ULTRAHD was a project funded by InriaHub to create a video processing pipeline for live performances, including actor tracking, actor recognition, and camera reframing driven by artificial intelligence. The project goal was to provide a reference implementation of the research results of Vineet Gandhi’s PhD thesis using the open-source image compositing framework NATRON. Algorithms were optimized to handle ultra-high-definition video (4K and beyond). The proposed architecture was tested on several hours of 4K video recorded during the rehearsals of a stage adaptation of Mary Shelley’s Frankenstein by French director Jean-Francois Peyret at Theatre de Vidy in Lausanne, Switzerland. The software is now known as the KINOAI back-end.

Collodi: Intuitive Tools for Animation

2012-2015 contract with two industrial partners, TeamTO and Mercenaries Engineering. The goal is to offer a new software solution for the efficient production of computer animation, providing:
– classical character design and rigging capabilities
– novel sketch-based tools for more intuitive, productive animation design
– novel physically-based character deformers
– physically-based animation for hair and cloth

This project has been supported by the Cap Digital and Imaginove clusters, and funded by the Rhône-Alpes and Île-de-France regions.


EADS: Idealization of components for structural mechanics

(2011-2014)
Cifre PhD in partnership with EADS IW to generate the shape of mechanical components through the dimensional reduction operations needed for mechanical simulations, e.g. transformations from volume bodies to shells or plates forming surface models, usually non-manifold ones. The topic also covers the shape detail removal process that takes place during the successive phases in which subsets of the initial shape are idealized. Mechanical criteria are taken into account that interact with the dimensional reduction and detail removal processes. The goal is to define the transformation operators so that a large range of mechanical components can be processed as automatically and robustly as possible. Some results from the homology computation topic may be used in the present context. An upcoming publication should describe the various stages of a component shape transformation in the context of assemblies.

HAPTIHAND, technology transfer project

(2012-2013)
The objective is to transfer a device, named HandNavigator, developed in collaboration with Arts et Métiers ParisTech/Institut Image as an add-on to the 6D Virtuose haptic device developed by HAPTION. The purpose of the HandNavigator is to monitor the movement of a virtual hand at a relatively detailed scale (movements of fingers and phalanges), in order to create precise interactions with virtual objects. This includes monitoring the whole Virtuose 6D arm and the HandNavigator in a virtual environment, for typical industrial applications such as maintenance simulation and virtual assembly. The project covers the creation of an API coupled to a physics engine to generate and monitor realistic and intuitive use of the entire device, a research study on the optimal use of the device, and project management.

BQR Intuactive

(2011-2014)
The goal of this project is to develop and compare 2D versus 3D interaction techniques for selecting, placing and editing 3D shapes.

LIMA 2

(2007-2013)
LIMA 2 (Loisirs et Images) is a Rhône-Alpes project in the ISLE cluster (Informatique, Signal, Logiciel Embarqué) focused on classification and computer graphics. This project funded the PhD of Lucian Stanculescu, advised by Raphaelle Chaine (LIRIS) and Marie-Paule Cani.

Scenoptique

(2012-2014)
In October 2011, we started a collaboration with the Theatre des Celestins in Lyon on the topic of interactive editing of rehearsals. This research program is funded by the Region Rhone-Alpes as part of their CIBLE program, with a budget for a doctoral thesis (Vineet Gandhi) and three large-sensor video cameras. The Theatre des Celestins is interested in novel tools for capturing, editing and browsing video recordings of their rehearsals, with applications in reviewing and simulating staging decisions. We are interested in building such tools as a direct application and test of our computational model of film editing, and also in building the world’s first publicly available video resource on the creative process of theatre rehearsal. Using state-of-the-art video analysis methods, this corpus is expected to be useful in our future work on procedural animation of virtual actors and narrative design. The corpus is also expected to be shared with the LEAR team as a test bed for video-based action recognition.

PERSYVAL

We received a doctoral grant from LABEX PERSYVAL, as part of the research program on authoring augmented reality (AAR), for PhD student Adela Barbelescu. Her thesis is entitled “Directing virtual actors by imitation and mutual interaction – technological and cognitive challenges”. Her advisors are Rémi Ronfard and Gérard Bailly (GIPSA-LAB).

ANR RepDyn

(2010-2012)

The purpose of this project is to enhance the performance of discrete element and fluid computations for the simulation of cracks in nuclear reactors or planes. Our task is to propose GPU implementations of particle models, as well as load balancing strategies in the context of multi-core, multi-GPU hardware. Marie Durand is doing her PhD thesis on this task.
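As an illustration of what load balancing can mean here, the following sketch (our assumption for illustration, not the project's actual scheme) splits particles across devices by sorting them along one axis and cutting the sorted list at equal cumulative cost, so that each GPU receives a contiguous spatial slab with a comparable amount of work:

    import numpy as np

    def partition_particles(positions, costs, n_devices):
        """Assign each particle to a device: sort along the x axis and cut
        the sorted list at equal cumulative cost, so every device gets a
        contiguous spatial slab with roughly the same amount of work."""
        order = np.argsort(positions[:, 0])
        cum = np.cumsum(costs[order])
        targets = np.linspace(0.0, cum[-1], n_devices + 1)[1:-1]
        cuts = np.searchsorted(cum, targets)
        assignment = np.empty(len(positions), dtype=int)
        for device, slab in enumerate(np.split(order, cuts)):
            assignment[slab] = device
        return assignment

    # Usage: balance 100,000 uniform-cost particles over 4 GPUs.
    pos = np.random.rand(100_000, 3)
    device_of = partition_particles(pos, np.ones(len(pos)), n_devices=4)

A real multi-GPU implementation would typically also exchange halo particles between neighbouring slabs and rebalance periodically as particles move.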

ANR ROMMA

(2010-2013)

The aim of the project is to efficiently and robustly model very complex mechanical assemblies. We are working on the interactive computation of contacts between mechanical parts using GPU techniques. We will also investigate the visualization of data with uncertainty in the context of the project.

ANR Sohusim

(2010-2013)

This project deals with the modeling and simulation of soft interactions between humans and objects. At the moment, no software is capable of modeling the physical behavior of human soft tissues (muscles, fat, skin) in mechanical interaction with the environment. Existing software such as LifeMod or OpenSim models muscles as variable-length links applying forces to an articulated rigid skeleton; the behaviour of the surrounding soft tissues is not taken into account, nor is it the main objective of such software. A first axis of this project aims at the simple modeling and simulation of a passive human manipulated by a mechatronic device, with the objective of studying and designing systems for handling patients with very low mobility (e.g. a clinical bed). The second axis concentrates on the detailed modeling and simulation of the interaction of an active lower limb with objects such as orthoses, exoskeletons, clothes or shoes. The objective here, too, is to obtain a tool for designing devices in permanent contact with the human body, allowing the adequate ergonomics to be determined in terms of shape, location and materials, according to the intended use.
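To make the contrast concrete, here is a minimal sketch of the variable-length muscle link abstraction used by such software, reduced to its geometric essence (the function and simplistic force law are illustrative assumptions, not the API of LifeMod or OpenSim):

    import numpy as np

    def muscle_link_force(origin, insertion, activation, f_max):
        """Model a muscle as a variable-length link: a contractile force of
        magnitude activation * f_max pulls the two skeletal attachment
        points together along the straight line joining them."""
        direction = insertion - origin
        unit = direction / np.linalg.norm(direction)
        magnitude = activation * f_max   # deliberately ignores force-length/velocity curves
        # Equal and opposite forces on the two attachment points.
        return magnitude * unit, -magnitude * unit

    # Usage: a half-activated 1000 N muscle spanning a knee-like joint.
    f_origin, f_insertion = muscle_link_force(
        np.array([0.0, 0.0, 0.0]), np.array([0.05, -0.3, 0.0]),
        activation=0.5, f_max=1000.0)

Everything between the two attachment points is abstracted away, and it is precisely this soft-tissue behaviour that Sohusim sets out to model.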

FUI Dynam’it

(2012-2014)

2-year contract with two industrial partners: TeamTO (production of animated series for television) and Artefacts Studio (video games). The goal is to adapt some technologies created in SOFA, especially the frame-based deformable objects [29], [28], to practical animation tools. This contract funds two engineers and one graphic artist for two years. Dynam’it is partially funded by the European Union.

ANR CHROME

(2012-2015)

Chrome is a national project funded by the French Research Agency (ANR). The project is coordinated by Julien Pettré, a member of MimeTIC. The partners are the INRIA-Grenoble IMAGINE team (Rémi Ronfard), Golaem SAS (Stephane Donikian), and Archivideo (Francois Gruson). The project was launched in September 2012. Chrome develops new and original techniques to massively populate huge environments. The key idea is to base the approach on the crowd patch paradigm, which populates environments from sets of pre-computed portions of crowd animation; these portions must satisfy specific conditions to be assembled into large scenes. The project also raises the question of visually exploring these complex scenes: we develop original camera control techniques to explore the most relevant parts of the animations without suffering occlusions due to the constantly moving content. A long-term goal of the project is to populate a large digital mockup of the whole of France (Territoire 3D, provided by Archivideo), which requires dedicated, efficient human animation techniques (Golaem). A strong originality of the project is to address the problem of crowded scene visualisation through the scope of virtual camera control, a task coordinated by IMAGINE team member Rémi Ronfard.
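The following toy sketch illustrates the assembly constraint behind crowd patches, under our own simplified formulation (the names and data layout are illustrative only): each patch stores where and when trajectories cross its edges, and two patches can be tiled side by side only if those boundary crossings match.

    from dataclasses import dataclass

    @dataclass
    class Patch:
        name: str
        # Boundary pattern per edge: tuple of (position_along_edge,
        # time_mod_period) events where a trajectory crosses that edge.
        sides: dict

    def compatible(left: Patch, right: Patch) -> bool:
        """Two patches tile horizontally only if every trajectory leaving
        through the left patch's east edge enters the right patch's west
        edge at the same position and (periodic) time."""
        return left.sides["east"] == right.sides["west"]

    # Usage: pedestrians cross the shared edge at 20% and 70% of its length.
    a = Patch("street", {"east": ((0.2, 0.0), (0.7, 0.5)), "west": ()})
    b = Patch("plaza",  {"east": (), "west": ((0.2, 0.0), (0.7, 0.5))})
    assert compatible(a, b)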

Action 3DS

(2011-2014)

Action3DS is a national project funded by the Caisse des Dépôts, as part of the Investissements d’avenir research program entitled Technologies de numérisation et de valorisation des contenus culturels, scientifiques et éducatifs.
The goal of the project is the development of a compact professional stereoscopic camera for 3D broadcast, together with associated software. Rémi Ronfard is leading a work package on real-time stereoscopic previsualization, gaze-based camera control and stereoscopic image quality.

AEN MorphoGenetics

(2012-2015)

3-year collaboration with INRIA teams Virtual Plants and Demar, as well as INRA (Agricultural research) and the Physics department of ENS Lyon. The goal is to better understand the coupling of genes and mechanical constraints in the morphogenesis (creation of shape) of plants.

PEPS SEMYO

(2012-2014)

2-year collaboration with the INRIA team DEMAR (Montpellier) and the Institut de Myologie (Paris) to simulate 3D models of pathological muscles, for which no standard model exists. The main idea is to use our mesh-less frame-based model to easily create mechanical models based on segmented MRI images.
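As a hint of what “mesh-less frame-based” means in practice, here is a minimal sketch under illustrative assumptions (this is not the actual SOFA implementation): material points sampled from the segmented MRI are deformed by a weighted blend of a sparse set of moving affine frames, in the spirit of skinning.

    import numpy as np

    def blend_frames(points, frames, weights):
        """Deform mesh-less material points by a weighted blend of affine
        frames: x' = sum_i w_i(x) * (A_i @ x + t_i).
        points: (n, 3); frames: list of (A, t) pairs; weights: (n, k),
        one normalized weight per point and frame."""
        deformed = np.zeros_like(points)
        for i, (A, t) in enumerate(frames):
            deformed += weights[:, i:i + 1] * (points @ A.T + t)
        return deformed

    # Usage: blend an identity frame with a translated frame.
    pts = np.random.rand(1000, 3)        # e.g. points sampled from a segmented muscle
    frames = [(np.eye(3), np.zeros(3)),
              (np.eye(3), np.array([0.1, 0.0, 0.0]))]
    w = np.random.rand(1000, 2)
    w /= w.sum(axis=1, keepdims=True)    # weights must sum to 1 per point
    deformed = blend_frames(pts, frames, w)

Because only the sparse frames carry degrees of freedom, mechanical models can be built directly from the segmented images without generating a volumetric mesh.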