Ph.D. position: Video-based dynamic garment representation and synthesis


The Ph.D. position is part of a joint laboratory between InterDigital, a leading technology and research company, and Inria, the French national research institute for digital science and technology. In particular, the Ph.D. is shared between an InterDigital team in Rennes, the Inria Morpheo team in Grenoble, and the Inria Mimetic team in Rennes.


It has recently become possible to reconstruct sequences of temporally coherent 3D models of humans in clothing from input videos, which subsequently makes it possible to synthesize new animations, e.g. [1,2]. Such state-of-the-art approaches typically learn a model of clothing on top of a parametric body model and are hence limited to relatively tight clothing. Our prior work models more diverse clothing using a fuzzy correspondence between the garments and the underlying parametric body, at the cost of losing fine-scale geometric detail [3]. An orthogonal line of work models clothing using garment templates, and learns the garment's dynamic behavior as the person wearing it moves, e.g. [4]. This strategy allows modeling detailed, wide, and multi-layered garments, and can be used to synthesize realistic dynamic videos [5].

This Ph.D. is concerned with learning efficient garment representations from a given input video. The work will focus on two aspects. First, we will study how to combine the advantages of the existing lines of work to learn a garment representation that handles wide and multi-layered clothing without requiring a detailed garment template at inference time. The resulting representation should generalize to a large set of garment styles and materials, and may hence benefit from physics-inspired models such as [4,6]. Temporal consistency would also benefit from the estimation of dense correspondences between clothed body parts, as proposed recently in [7]. Second, we will use the resulting representation to synthesize new animations and, eventually, to change the appearance of the garments. A possible aspect to consider for the synthesis and transmission of these models over a network is their sparsity, or the compression capability of the extracted latent representation. Evaluating such animations is not straightforward, and different evaluation metrics will be considered for this task.


  1. Dynamic Surface Function Networks for Clothed Human Bodies. Burov, Nießner, Thies. International Conference on Computer Vision, 2021.
  2. ICON: Implicit Clothed humans Obtained from Normals. Xiu, Yang, Tzionas, Black. Conference on Computer Vision and Pattern Recognition, 2022.
  3. Analyzing Clothing Layer Deformation Statistics of 3D Human Motions. Yang, Franco, Hétroy-Wheeler, Wuhrer. European Conference on Computer Vision, 2018.
  4. SNUG: Self-Supervised Neural Dynamic Garments. Santesteban, Otaduy, Casas. Conference on Computer Vision and Pattern Recognition, 2022.
  5. Dynamic Neural Garments. Zhang, Ceylan, Wang, Mitra. SIGGRAPH Asia, 2021.
  6. Learning-Based Cloth Material Recovery from Video. Yang, Liang, Lin. Conference on Computer Vision and Pattern Recognition, 2017.
  7. BodyMap: Learning Full-Body Dense Correspondence Map. Ianina et al. Conference on Computer Vision and Pattern Recognition, 2022.


The Ph.D. will start in October 2022 and last 3 years. It will be supervised by Pierre Hellier (InterDigital Rennes), Bharath Damodaran (InterDigital Rennes), Adnane Boukhayma (Inria Rennes), and Stefanie Wuhrer (Inria Grenoble).


The Ph.D. will take place at Inria Grenoble with planned regular research visits in Rennes.

Candidate profile

  • Master's degree in Computer Science or Applied Mathematics.
  • Solid programming skills, e.g. Python and/or C++.
  • Solid mathematical knowledge in geometry, linear algebra, and statistics.
  • Experience with deep learning and shape modeling is a plus.
  • Experience with physics-based simulation is a plus.
  • Good level of English. French is not required.


Only full applications including

  • CV
  • letter of motivation
  • transcript of grades
  • names and contact information of two people willing to provide reference letters (typically a professor who can evaluate the candidate)

are considered. To apply, follow this link. For more information, contact the supervisors listed above.
