Presentation

VirtUs is the code name for a joint project-team of Inria and the Universities of Rennes 1 and Rennes 2, positioned at the interface between digital science and motion science. It is situated in the D5 scientific department of Irisa, alongside teams that work in the areas of Virtual Reality, Virtual Humans, Interactions and Robotics.

The research within the VirtUs team centres on the simulation of populated virtual spaces where virtual and real humans coexist. It is focused on the following research topics:

1. NextGen Virtual Characters

Our first research focus explores new methods for character animation. Animation is a key element of our immersive simulations and must meet a number of criteria specific to virtual reality applications that allow interaction with users. We are looking for high-performance, real-time animation methods that deliver credible results and allow characters to be expressive and interactive, while remaining efficient enough to populate virtual scenes with many characters (see Research focus 2). To this end, we focus primarily on kinematic methods (e.g., Motion Matching) based on motion-capture datasets, and propose methods that adapt animated movements to the fine details of character-user interactions. We also aim to make our characters' behaviour easily interpretable from their animations (non-verbal communication), with the goal of offering autonomous, efficient and credible virtual humans.
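
To make the kinematic approach concrete, the sketch below illustrates the core idea behind Motion Matching in its generic, textbook form: at every frame, search a database of motion-capture features for the pose that best matches the character's current state and desired future trajectory, then continue playback from there. This is not the team's implementation; the database contents, feature dimensions and weights are all hypothetical.

```python
import numpy as np

# Minimal Motion Matching sketch (illustrative only, not the team's code).
# Each database row is a feature vector describing one captured frame:
# e.g., foot positions, hip velocity, and the future root trajectory.

rng = np.random.default_rng(0)
N_FRAMES, FEATURE_DIM = 10_000, 27          # hypothetical database size
database = rng.standard_normal((N_FRAMES, FEATURE_DIM))

# Weights trade off pose continuity against trajectory following.
weights = np.ones(FEATURE_DIM)
weights[-6:] = 2.0                          # emphasise the desired future trajectory

def best_matching_frame(query: np.ndarray) -> int:
    """Return the index of the database frame closest to the query features."""
    diffs = (database - query) * weights
    costs = np.einsum("ij,ij->i", diffs, diffs)   # squared weighted distance
    return int(np.argmin(costs))

# Each frame: build the query from the current pose and user input, then jump
# (or blend) to the best-matching database frame and play it forward.
query = rng.standard_normal(FEATURE_DIM)
print(f"switching playback to database frame {best_matching_frame(query)}")
```

In practice the linear search is typically accelerated (e.g., with a spatial index over the feature space) and transitions are blended to avoid visual popping.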

Figure 1: balance recovery after a push [Jensen et al. 2023] (left) and avoiding expressive virtual humans in a constrained environment [Patotskaya et al. 2023] (right).
2. Populated dynamic worlds

This research explores methods for populating virtual worlds, i.e., creating a (large) number of autonomous virtual humans, capable of interacting with each other, with their environment and with a user, who together occupy the space in a realistic manner (distribution of crowd density, types of actions carried out in specific locations), consistent with the actual use of a similar real environment. Whereas the previous research focus addresses the detailed animation of individual virtual humans, this axis targets the higher level of their simulation, i.e., their global trajectories, their modes of interaction, etc. The team is interested in two aspects of this general context: first, crowd simulation (i.e., simulating the trajectories of many virtual humans so that the crowd they form moves like a real human crowd), and second, authoring tools (i.e., how to set up and control a crowd simulation so that the virtual crowd behaves as the designer of a virtual world expects).
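
As an illustration of the microscopic level of crowd simulation, the sketch below implements one step of a social-force-style model, a classic baseline in the field rather than the team's own model: each agent accelerates towards its goal while being repelled by nearby agents. All parameter values are hypothetical.

```python
import numpy as np

# One step of a social-force-style crowd model (textbook baseline, illustrative only).
rng = np.random.default_rng(1)
N = 50
pos = rng.uniform(0.0, 20.0, size=(N, 2))    # agent positions (m)
vel = np.zeros((N, 2))                       # agent velocities (m/s)
goals = rng.uniform(0.0, 20.0, size=(N, 2))  # per-agent goal positions (m)

V_PREF, TAU, DT = 1.3, 0.5, 0.05             # preferred speed, relaxation time, timestep
A, B = 2.0, 0.3                              # repulsion strength and range (hypothetical)

def step(pos, vel):
    # Driving force: relax towards the preferred velocity aimed at the goal.
    to_goal = goals - pos
    dist = np.linalg.norm(to_goal, axis=1, keepdims=True) + 1e-9
    force = (V_PREF * to_goal / dist - vel) / TAU

    # Pairwise repulsion decaying exponentially with inter-agent distance.
    # Self-interaction contributes nothing since diff is zero on the diagonal.
    diff = pos[:, None, :] - pos[None, :, :]            # (N, N, 2)
    d = np.linalg.norm(diff, axis=2, keepdims=True) + 1e-9
    force += (A * np.exp(-d / B) * diff / d).sum(axis=1)

    vel = vel + DT * force
    return pos + DT * vel, vel

for _ in range(200):
    pos, vel = step(pos, vel)
print("mean distance to goal:", np.linalg.norm(goals - pos, axis=1).mean())
```

Velocity-based and vision-based pedestrian models follow the same simulation loop with a different rule for selecting each agent's next velocity.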

Figure 2: authoring tools using interaction fields [Colas et al. 2022] (left) and classification of balance recovery steps in the concert application [Chatagnon et al. 2023] (right).
3. Locomotion, pedestrian interactions, collective behaviours and non-verbal communication

In this focus, the team positions itself as a user of its own immersive simulations to conduct research on human behaviour, in particular pedestrian behaviour, inter-individual interactions and the collective behaviour of human crowds. Our scientific objectives are twofold: first, to validate the use of VR in these different thematic contexts, and second, to advance knowledge of these subjects, typically beyond what would be possible through experimentation in real-world conditions alone.

Figure 3: the One-Man crowd method [Yin et al. 2023] (left) and user locomotion in crowds [Jordan et al. 2024] (right).
4. Computational cinematography: towards data-driven authoring and content creation for virtual and augmented worlds

The objective of the fourth research focus is to adopt a broader view and to explore, beyond character animation techniques, a range of key scientific challenges related to computational cinematography: a collection of research topics focussed on estimation and generation techniques for cinematographic content. This encompasses estimating cinematic features from film content (composition, staging, lighting) and exploiting this extracted knowledge in generative approaches, whether for the augmentation of film content (inserting virtual objects), the creation of new 3D content (scene layouts, character and camera trajectories), or direct video generation. Our expected outcome is the creation of more compelling worlds, whether fully virtual or mixed real/virtual, which can be exploited in the other research axes of the VirtUs team. The group is well recognised internationally on the topic of computational cinematography.
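
To give a flavour of the generative side, the sketch below shows a toy ingredient of automated virtual cinematography: scoring candidate camera set-ups by how closely a subject's screen projection lands on a rule-of-thirds intersection, then keeping the best one. The pinhole model, the candidate cameras and the scoring criterion are illustrative assumptions, not the team's systems.

```python
import numpy as np

# Toy virtual-cinematography sketch (illustrative only): among candidate
# cameras, pick the one whose framing best respects the rule of thirds.

THIRDS = np.array([[1/3, 1/3], [1/3, 2/3], [2/3, 1/3], [2/3, 2/3]])

def screen_position(cam_pos, look_at, subject, fov_deg=60.0, up=(0.0, 0.0, 1.0)):
    """Pinhole projection of `subject` onto the camera's normalised image [0,1]^2."""
    f = look_at - cam_pos
    f = f / np.linalg.norm(f)                # forward axis
    r = np.cross(f, up)
    r = r / np.linalg.norm(r)                # right axis
    u = np.cross(r, f)                       # up axis
    d = subject - cam_pos
    depth = d @ f
    if depth <= 0:                           # subject behind the camera
        return None
    scale = 2.0 * np.tan(np.radians(fov_deg) / 2.0)
    return np.array([0.5 + (d @ r) / (depth * scale),
                     0.5 + (d @ u) / (depth * scale)])

def thirds_score(cam_pos, look_at, subject):
    """Lower is better: distance of the subject to the nearest thirds intersection."""
    p = screen_position(np.asarray(cam_pos, float), np.asarray(look_at, float),
                        np.asarray(subject, float))
    if p is None or not ((0 <= p) & (p <= 1)).all():
        return np.inf                        # reject unframed subjects outright
    return np.linalg.norm(THIRDS - p, axis=1).min()

subject = np.array([0.0, 0.0, 1.7])          # e.g., a character's head
candidates = [((4, -1, 1.7), (0, 0.5, 1.7)), # hypothetical (position, look-at) pairs
              ((4,  0, 1.7), (0, 0.0, 1.7)),
              ((3,  2, 2.5), (0, 0.0, 1.5))]
best = min(candidates, key=lambda c: thirds_score(c[0], c[1], subject))
print("best camera position:", best[0])
```

Real systems combine many such criteria (visibility, shot size, continuity rules) and optimise them over time rather than per frame.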

Figure 4: real-time computational cinematographic editing for broadcasting of volumetric-captured events [Bourel et al. 2023] (left) and SPARK, a self-supervised, personalised, real-time monocular face capture method [Baert et al. 2024] (right).