Grégoire Nieto

I am passionate about multimedia, from digital photography to cinema to video games. After an electrical engineering curriculum at Supélec (École supérieure d'électricité), I chose to develop my skills in computer graphics at University College London. I graduated in 2014 and started a PhD in computational photography under the supervision of Frédéric Devernay and James Crowley.

I am currently a member of the IMAGINE team at Inria Rhône-Alpes (Grenoble, France), where I conduct my research on light fields and Image-Based Rendering (IBR). For a summary of my PhD, please read the abstract below.

All my code is available on GitHub.


Publications

2017

Linearizing the Plenoptic Space
Grégoire Nieto, Frédéric Devernay, James Crowley
Light Fields for Computer Vision (LF4CV), CVPR 2017 Workshops, Jul 2017, Honolulu, United States. <http://lightfield-analysis.net/LF4CV/>
https://hal.inria.fr/hal-01572479/file/nieto-lf4cv2017.pdf

2016

Variational Image-Based Rendering with Gradient Constraints
Grégoire Nieto, Frédéric Devernay, James Crowley
IC3D – 2016 International Conference on 3D Imaging, Dec 2016, Liège, Belgium. <10.1109/IC3D.2016.7823449>
https://hal.archives-ouvertes.fr/hal-01402528/file/ic3d2016.pdf

Rendu basé image avec contraintes sur les gradients [Image-based rendering with gradient constraints]
Grégoire Nieto, Frédéric Devernay, James Crowley
Reconnaissance des Formes et l’Intelligence Artificielle, RFIA 2016, Jun 2016, Clermont-Ferrand, France
https://hal.archives-ouvertes.fr/hal-01393942/file/R3D02.pdf

2015

Placement optimal de caméras contraintes pour la synthèse de nouvelles vues [Optimal placement of constrained cameras for novel view synthesis]
Grégoire Nieto, Frédéric Devernay, James L. Crowley
Journées francophones des jeunes chercheurs en vision par ordinateur, Jun 2015, Amiens, France. pp.2
https://hal.archives-ouvertes.fr/hal-01161825/file/orasis2015_nieto_gre.pdf

Light Field Remote Vision

Abstract

Light fields have gathered much interest during the past few years. Captured with a plenoptic camera or a camera array, they sample the plenoptic function, which provides rich information about the radiance of any ray passing through the observed scene. They enable a plethora of computer vision and graphics applications: 3D reconstruction, segmentation, novel view synthesis, inpainting or matting, for instance.
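Concretely, the plenoptic function records the radiance along every light ray; for a static scene captured by a set of cameras, it is commonly reduced to a 4D light field using the two-plane (light slab) parametrization, as sketched below.

```latex
% Full plenoptic function (radiance as a function of position, direction,
% wavelength and time), and its common 4D reduction for a static scene,
% where a ray is indexed by its intersections (u, v) and (s, t) with two planes:
P(x, y, z, \theta, \phi, \lambda, t)
\;\longrightarrow\;
L(u, v, s, t)
```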

Reconstructing the light field consists in recovering the missing rays given the captured samples. In this work we address the problem of reconstructing the light field in order to synthesize an image, as if it were taken by a camera closer to the scene than the input plenoptic device or set of cameras. Our approach is to formulate the light field reconstruction challenge as an Image-Based Rendering (IBR) problem. Most IBR algorithms first estimate the geometry of the scene, known as a geometric proxy, to establish correspondences between the input views and the target view. A new image is then generated from both the input images and the geometric proxy, typically by projecting the input images onto the target point of view and blending their intensities.
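As a rough illustration of this classic pipeline, and not of the exact formulation used in the thesis, the sketch below backprojects each target pixel through a proxy depth map, reprojects it into every source view, and blends the sampled colors; the camera convention (x_cam = R x_world + t) and all parameter names are assumptions made for the example.

```python
# Minimal sketch of depth-proxy-based reprojection and blending.
import numpy as np

def render_target_view(depth, K_t, R_t, t_t, sources, blend_weights):
    """depth: (H, W) proxy depth for the target view.
    K_t, R_t, t_t: target view intrinsics / rotation / translation.
    sources: list of (image, K, R, t) tuples for the input views.
    blend_weights: one scalar weight per source view (e.g. based on view angle)."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # (3, H*W)

    # Backproject target pixels to 3D world points using the proxy depth.
    rays_cam = np.linalg.inv(K_t) @ pix
    pts_cam = rays_cam * depth.reshape(1, -1)
    pts_world = R_t.T @ (pts_cam - t_t.reshape(3, 1))

    accum = np.zeros((H * W, 3))
    total_w = np.zeros((H * W, 1))
    for (img, K, R, t), w in zip(sources, blend_weights):
        # Reproject the 3D points into this source view and sample its colours.
        proj = K @ (R @ pts_world + t.reshape(3, 1))
        xy = (proj[:2] / proj[2:]).T
        xi = np.round(xy[:, 0]).astype(int)
        yi = np.round(xy[:, 1]).astype(int)
        ok = (proj[2] > 0) & (xi >= 0) & (xi < img.shape[1]) & (yi >= 0) & (yi < img.shape[0])
        accum[ok] += w * img[yi[ok], xi[ok]]
        total_w[ok] += w

    return (accum / np.maximum(total_w, 1e-8)).reshape(H, W, 3)
```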

A naive color blending of the input images does not guarantee the coherence of the synthesized image. We therefore propose a direct multi-scale approach based on Laplacian rendering, which blends the source images at all frequencies and thus prevents rendering artifacts.
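The idea of blending the sources per frequency band can be illustrated with standard Laplacian-pyramid blending, sketched below for pre-warped source images and per-pixel weight maps; the Laplacian rendering formulation of the thesis differs in its details.

```python
# Per-frequency-band blending with Laplacian pyramids (illustrative only).
import cv2
import numpy as np

def laplacian_pyramid(img, levels):
    pyr = []
    cur = img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)          # band-pass detail at this scale
        cur = down
    pyr.append(cur)                   # low-frequency residual
    return pyr

def blend_multiscale(images, weights, levels=4):
    """images: pre-warped source images (H, W, 3); weights: per-pixel (H, W) maps."""
    pyrs = [laplacian_pyramid(im, levels) for im in images]
    wsum = np.maximum(sum(weights), 1e-8)
    weights = [w / wsum for w in weights]                  # normalise the weight maps
    wpyrs = []
    for w in weights:
        # Gaussian pyramid of each weight map, one level per frequency band.
        gp = [w.astype(np.float32)]
        for _ in range(levels):
            gp.append(cv2.pyrDown(gp[-1]))
        wpyrs.append(gp)
    # Blend every band separately, then collapse the pyramid back.
    blended = []
    for lvl in range(levels + 1):
        band = sum(wp[lvl][..., None] * p[lvl] for wp, p in zip(wpyrs, pyrs))
        blended.append(band)
    out = blended[-1]
    for lvl in range(levels - 1, -1, -1):
        out = cv2.pyrUp(out, dstsize=(blended[lvl].shape[1], blended[lvl].shape[0])) + blended[lvl]
    return np.clip(out, 0, 255)       # assumes 8-bit input range
```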

However, imperfections of the geometric proxy are another major cause of rendering artifacts, which appear as high-frequency noise in the synthesized image. We introduce a novel variational rendering method with gradient constraints on the target image, which yields a better-conditioned linear system to solve and removes the high-frequency noise due to the geometric proxy.
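As a toy, single-channel illustration of mixing intensity and gradient constraints in one sparse linear system, not the actual variational formulation of the thesis, one can solve a screened-Poisson-like least-squares problem; the weight lambda_g below is an arbitrary illustrative value.

```python
# Solve min ||x - c||^2 + lambda * (||Dx x - gx||^2 + ||Dy x - gy||^2)
# via its normal equations, for small single-channel images.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def grad_ops(H, W):
    n = H * W
    idx = np.arange(n).reshape(H, W)
    # Forward differences between horizontally adjacent pixels.
    i = idx[:, :-1].ravel(); j = idx[:, 1:].ravel()
    rows = np.arange(i.size)
    Dx = sp.coo_matrix((np.r_[-np.ones(i.size), np.ones(i.size)],
                        (np.r_[rows, rows], np.r_[i, j])), shape=(i.size, n)).tocsr()
    # Forward differences between vertically adjacent pixels.
    i = idx[:-1, :].ravel(); j = idx[1:, :].ravel()
    rows = np.arange(i.size)
    Dy = sp.coo_matrix((np.r_[-np.ones(i.size), np.ones(i.size)],
                        (np.r_[rows, rows], np.r_[i, j])), shape=(i.size, n)).tocsr()
    return Dx, Dy

def solve_with_gradient_constraints(color_target, gx_target, gy_target, lambda_g=10.0):
    """color_target: (H, W) blended colour estimate;
    gx_target: (H, W-1) and gy_target: (H-1, W) desired gradients."""
    H, W = color_target.shape
    Dx, Dy = grad_ops(H, W)
    I = sp.identity(H * W, format="csr")
    A = I + lambda_g * (Dx.T @ Dx + Dy.T @ Dy)
    b = color_target.ravel() + lambda_g * (Dx.T @ gx_target.ravel() + Dy.T @ gy_target.ravel())
    x = spla.spsolve(A.tocsc(), b)
    return x.reshape(H, W)
```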

Some scene reconstructions are very challenging because of the presence of non-Lambertian materials; moreover, even a perfect geometric proxy is not sufficient when reflections, transparencies and specularities defy the rules of parallax. We propose an original method based on the local approximation of the sparse light field in the plenoptic space, which generates a new viewpoint without requiring any explicit geometric proxy reconstruction. We evaluate our method both quantitatively and qualitatively on non-trivial scenes that contain non-Lambertian surfaces.
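The flavor of this proxy-free approach can be sketched as follows: the captured rays are treated as points in 4D ray space, and the color of a query ray is predicted by a local affine fit over its nearest sampled rays. This only illustrates the idea of reconstructing rays directly in the plenoptic domain; the actual local model and parametrization of the thesis differ.

```python
# Local affine approximation of a sparse light field in two-plane ray coordinates.
import numpy as np
from scipy.spatial import cKDTree

class LocalLightField:
    def __init__(self, ray_coords, ray_colors):
        """ray_coords: (N, 4) two-plane coordinates (u, v, s, t); ray_colors: (N, 3)."""
        self.coords = np.asarray(ray_coords, dtype=np.float64)
        self.colors = np.asarray(ray_colors, dtype=np.float64)
        self.tree = cKDTree(self.coords)

    def query(self, ray, k=32):
        """Predict the colour of one query ray from its k nearest captured rays."""
        ray = np.asarray(ray, dtype=np.float64)
        _, nn = self.tree.query(ray, k=k)
        X = self.coords[nn] - ray                  # neighbours, centred on the query ray
        A = np.hstack([np.ones((k, 1)), X])        # affine design matrix [1, du, dv, ds, dt]
        coef, *_ = np.linalg.lstsq(A, self.colors[nn], rcond=None)
        return coef[0]                             # affine model evaluated at the query ray
```

Synthesizing a view then amounts to casting one such query ray per target pixel, with no explicit 3D reconstruction in between.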

Lastly, we discuss the optimal placement of constrained cameras for IBR, and the use of our algorithms to recover objects hidden behind a camouflage.

The proposed algorithms are illustrated by results on both structured (camera arrays) and unstructured plenoptic datasets.

Keywords

Image-Based Rendering, Computational Photography, 3D Reconstruction, Light Field, Plenoptic Imaging.

Code

Available on GitHub.