Grégoire Nieto

I am passionate about multimedia, from digital photography to cinema to video games. After an electrical engineering curriculum at Supélec (École supérieure d'électricité), I chose to develop my skills in computer graphics at University College London. I graduated in 2014 and began a PhD in computational photography under the supervision of Frédéric Devernay and James Crowley.

I am currently a member of the IMAGINE team at INRIA Rhône-Alpes (Grenoble, France), where I conduct research on light fields and Image-Based Rendering (IBR). For a summary of my PhD, please read the abstract below.

All my code is available on GitHub.




Light Field Remote Vision


Light fields have gathered much interest during the past few years. Captured with a plenoptic camera or a camera array, they sample the plenoptic function, which provides rich information about the radiance of any ray passing through the observed scene. They enable a plethora of computer vision and graphics applications: 3D reconstruction, segmentation, novel view synthesis, inpainting, and matting, for instance.

Reconstructing the light field consists of recovering the missing rays given the captured samples. In this work we address the problem of reconstructing the light field in order to synthesize an image, as if it were taken by a camera closer to the scene than the input plenoptic device or set of cameras. Our approach is to formulate light field reconstruction as an Image-Based Rendering (IBR) problem. Most IBR algorithms first estimate the geometry of the scene, known as a geometric proxy, to establish correspondences between the input views and the target view. A new image is generated from both the input images and the geometric proxy, typically by projecting the input images onto the target point of view and blending them in intensity.
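The final blending step of this pipeline can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: the function name, the NaN convention for pixels with no valid reprojection, and the per-pixel weights are all assumptions, and the sketch supposes the source images have already been warped onto the target viewpoint via the geometric proxy.

```python
import numpy as np

def blend_warped_views(warped, weights):
    """Blend input views already reprojected onto the target viewpoint.

    warped  : list of HxWx3 arrays (source images warped via the geometric
              proxy; pixels with no valid reprojection hold NaN)
    weights : list of HxW arrays (per-pixel blending weights, e.g. based on
              angular proximity of each source camera to the target ray)
    """
    acc = np.zeros_like(warped[0])
    wsum = np.zeros(warped[0].shape[:2])
    for img, w in zip(warped, weights):
        valid = ~np.isnan(img[..., 0])           # skip holes in this view
        acc[valid] += img[valid] * w[valid, None]
        wsum[valid] += w[valid]
    wsum = np.maximum(wsum, 1e-8)                # avoid division by zero
    return acc / wsum[..., None]
```

With equal weights this reduces to a per-pixel average of the views that actually see the point, which is exactly the naive blending discussed next.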

A naive color blending of the input images does not guarantee the coherence of the synthesized image. We therefore propose a direct multi-scale approach based on Laplacian rendering that blends the source images at all frequencies, thus preventing rendering artifacts.
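The frequency-band idea behind multi-scale blending can be illustrated with a toy two-image Laplacian-pyramid blend. This is only a sketch of the principle, not the thesis's Laplacian rendering algorithm: it assumes grayscale images with power-of-two dimensions and uses a box filter in place of a proper Gaussian pyramid kernel.

```python
import numpy as np

def _down(img):
    # 2x2 box-filter downsampling (stand-in for a Gaussian pyramid step)
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def _up(img, shape):
    # nearest-neighbour upsampling back to the finer resolution
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]

def laplacian_blend(a, b, mask, levels=3):
    """Blend two images band by band: each Laplacian detail band is mixed
    with the mask at its own scale, so low and high frequencies transition
    at different spatial extents and seams are avoided."""
    la, lb, lm = a, b, mask
    bands = []
    for _ in range(levels):
        da, db, dm = _down(la), _down(lb), _down(lm)
        # Laplacian band = detail lost by one downsampling step
        bands.append((la - _up(da, la.shape), lb - _up(db, lb.shape), lm))
        la, lb, lm = da, db, dm
    out = lm * la + (1 - lm) * lb                # blend the coarsest residual
    for ba, bb, bm in reversed(bands):           # collapse the pyramid
        out = _up(out, ba.shape) + bm * ba + (1 - bm) * bb
    return out
```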

However, imperfections of the geometric proxy are also a major cause of rendering artifacts, which appear as high-frequency noise in the synthesized image. We introduce a novel variational rendering method with gradient constraints on the target image, yielding a better-conditioned linear system to solve and removing the high-frequency noise due to the geometric proxy.
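A one-dimensional analogue can illustrate what a gradient constraint adds to the data term: the target signal is asked to match both the blended colors and a prescribed gradient field, which damps high-frequency noise in the data. The forward-difference operator and the regularization weight below are illustrative choices, not the formulation used in the thesis.

```python
import numpy as np

def gradient_constrained_solve(data, grad_target, lam=10.0):
    """Minimize ||u - data||^2 + lam * ||D u - grad_target||^2 over a 1-D
    signal u, where D is the forward-difference operator. The normal
    equations give the linear system (I + lam D^T D) u = data + lam D^T g."""
    n = len(data)
    D = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    D[idx, idx] = -1.0                 # forward differences: (Du)_i = u_{i+1} - u_i
    D[idx, idx + 1] = 1.0
    A = np.eye(n) + lam * D.T @ D      # symmetric positive definite
    b = data + lam * D.T @ grad_target
    return np.linalg.solve(A, b)
```

Setting the gradient target to zero drives the solution toward a smooth signal; setting it to the data's own gradient leaves the data untouched.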

Some scene reconstructions are very challenging because of the presence of non-Lambertian materials; moreover, even a perfect geometric proxy is not sufficient when reflections, transparencies and specularities defy the rules of parallax. We propose an original method based on a local approximation of the sparse light field in plenoptic space to generate a new viewpoint without any explicit geometric proxy reconstruction. We evaluate our method both quantitatively and qualitatively on non-trivial scenes that contain non-Lambertian surfaces.
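The geometry-free idea can be illustrated on a toy two-plane light field, where a new ray is synthesized by a local linear fit to the sampled plenoptic function around the query. The sketch below is a deliberate simplification of the thesis's method: it keeps one angular and one spatial coordinate (a 2-D slice of the 4-D light field) and uses plain bilinear interpolation as the local approximation.

```python
import numpy as np

def interpolate_ray(lf, u, s):
    """Sample a ray L(u, s) from a 2-D light-field slice by bilinear
    interpolation across the angular (u) and spatial (s) axes -- a local
    linear fit to the plenoptic function, with no explicit geometry.

    lf : (U, S) array of radiance samples; u, s : fractional coordinates
    with 0 <= u <= U-1 and 0 <= s <= S-1."""
    u0, s0 = int(np.floor(u)), int(np.floor(s))
    u0, s0 = min(u0, lf.shape[0] - 2), min(s0, lf.shape[1] - 2)
    du, ds = u - u0, s - s0
    return ((1 - du) * (1 - ds) * lf[u0, s0]
            + du * (1 - ds) * lf[u0 + 1, s0]
            + (1 - du) * ds * lf[u0, s0 + 1]
            + du * ds * lf[u0 + 1, s0 + 1])
```

Because the fit is purely local in plenoptic space, a specular highlight that moves "against" parallax is reproduced as recorded by the nearby samples, rather than being misplaced by a depth-based reprojection.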

Lastly, we discuss the question of the optimal placement of constrained cameras for IBR, and the use of our algorithms to recover objects hidden behind camouflage.

The proposed algorithms are illustrated by results on both structured (camera arrays) and unstructured plenoptic datasets.


Image-Based Rendering, Computational Photography, 3D Reconstruction, Light Field, Plenoptic Imaging.


Available on GitHub.