Peter Sturm, Research

Research topics. Click on the topics to see the associated publications.

  • Urban planning and related topics Our work mainly aims at contributing to sustainable development in general, and to the sustainable development of urban regions in particular. This work started recently. We began by investigating numerical algorithms for instantiating and validating integrated land use and transportation models. Other work concerns the study of urban sprawl, land cover changes, and material flows.
  • 3D scene modelling Our motivation is twofold: (i) to study representations and develop algorithms suitable for obtaining good-quality 3D models, and (ii) to do so by reasoning from first principles. As for the latter aspect, we started by revisiting which photoconsistency measure should be used for multi-view stereo and other 3D modelling problems: one aims at estimating a 3D model (plus, possibly, appearance or even lighting) whose renderings from the viewpoints of the input images are as close as possible to the acquired images. This is easy to state, but the corresponding cost function is rather hard to optimize. One of our main contributions is the derivation of the gradient of this type of cost function; see Gargallo et al. 2007 and Delaunoy et al. 2008, and the generic sketch of such a cost function after this list.
  • Joint modelling of geometry, reflectance and/or illumination Images of objects result from an interplay between object shape and appearance, lighting, and camera properties (geometric and radiometric). Ideally, one would like to recover all of these from images, but this is obviously highly ill-posed in general. We have explored several directions within this geometry – appearance – lighting continuum. One direction has been the joint estimation of shape and appearance for non-Lambertian surfaces, using images acquired under controlled lighting. Other directions have been the recovery of lighting conditions or radiometric calibration from unstructured image collections. I think there is lots of room for future work on the geometry – appearance – lighting continuum.
  • Reconstruction and detection of specular or refractive surfaces Related to the previous topic, but less appearance- and more geometry-oriented. This work started with the reconstruction of perfectly specular objects (mirrors) and led to working on the detection and 3D reconstruction of semi-transparent surfaces (reflective plus refractive).
  • Omnidirectional vision and generic camera models Our main motivation was to derive theories and algorithms for 3D vision that are applicable to any camera, be it a “regular” camera, a catadioptric one, a fish-eye device, etc. This was achieved for the problems of camera calibration, pose estimation, motion estimation, 3D reconstruction, and also for self-calibration. Also, check out our monograph on the many kinds of camera models that have been proposed in the literature: Sturm et al. 2011, with more than 500 references.
  • 3D reconstruction using geometric constraints Older work that explored two main directions: (i) interactive 3D modelling, where the user provides simple geometric constraints (parallelism of lines, coplanarity of points, etc.) that may even enable 3D reconstruction from a single image, and (ii) using such constraints to increase accuracy in multi-view 3D reconstruction.
  • Camera calibration Our work on calibration (of perspective cameras) includes the first peer-reviewed publication of the popular plane-based calibration approach (which is, for example, implemented in OpenCV), as well as contributions to the calibration of multi-camera systems and to several special cases (calibration from images of circles, of one-dimensional objects, …). A minimal OpenCV sketch of plane-based calibration is given after this list.
  • Self-calibration and critical motions Older work, most of it done during and shortly after my PhD. The main contribution is a theoretical study of degeneracies of the self-calibration problem: in a nutshell, it turned out that the most “natural” camera motions (e.g. turning around an object along a circle) are the worst for the feasibility and accuracy of self-calibration. This work explains why self-calibration was originally deemed highly unstable, and it allows one to define guidelines for image acquisitions that are favorable for self-calibration (interestingly, such guidelines are commonplace in photogrammetry, where they seem to have been derived more empirically). Other work on self-calibration includes algorithms for self-calibrating cameras from images of a planar object with otherwise unknown structure.
  • Structure from motion for lines A rather complete treatment of structure from motion (motion estimation, triangulation, bundle adjustment) for line features, as opposed to the better-studied point features.
  • Triangulation of points Our main contribution here is the first globally optimal method for triangulating 3D points from correspondences in two images (a basic linear triangulation sketch is given with the associated publications below).
  • Projective reconstruction We have extended the classical Tomasi-Kanade factorization approach for jointly estimating object shape and camera motion from the affine/orthographic camera model to the perspective one; the basic idea is sketched after this list.
  • Object detection and tracking I have also worked a little on object detection and tracking, among other things on tracking using particle filters.
  • Image registration and deblurring Our main body of work concerns the registration of multispectral pushbroom images acquired by spaceborne cameras, for image mosaicing as well as for high-accuracy estimation of a satellite’s orientation over time.
  • Projector-camera systems Our work concerns the (self-)calibration of projector-camera systems and their use for one-shot (i.e. instantaneous) object scanning.
  • Model selection for two view geometry During my post-doc, Steve Maybank and I made some attempts at rigorously applying the MDL principle for model selection in two-view geometry (e.g. selecting between a fundamental matrix and a homography, given a set of point matches).
  • Other structure from motion work A mix of works, ranging from pose estimation, to structure from motion for dynamic scenes, to the study of the impact of inaccurate camera calibration on the accuracy of motion estimation, etc. Check out my article and the associated talk, which summarize some current findings of a deep literature study on the history of 3D vision (more to come, one day…): Sturm 2011.
  • PhD and Habilitation theses
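
Regarding 3D scene modelling: the image-based cost function mentioned above can be written, in a generic form and in notation of my own choosing (not necessarily that of the cited papers), as the sum over all input views of the discrepancy between each acquired image and the rendering of the current 3D model from that view's camera.

```latex
% A generic reprojection-based photoconsistency cost (illustrative notation,
% not taken verbatim from the cited papers).
% S: estimated 3D model (shape, possibly appearance/lighting); I_i: i-th input image;
% \Pi_i(S): rendering of S from the viewpoint of camera i; \Omega_i: domain of image i.
E(S) \;=\; \sum_{i=1}^{n} \int_{\Omega_i} \big\| I_i(x) - \Pi_i(S)(x) \big\|^{2} \, dx
```

One reason such cost functions are hard to optimize is that the rendering operator involves visibility, so the gradient with respect to the model has to account for occlusions and moving occlusion boundaries; the exact treatment is the subject of the cited papers.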
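
Regarding camera calibration: the plane-based approach can be illustrated with OpenCV's standard calibration pipeline, which takes several images of a planar pattern (typically a chessboard). The following is only a minimal sketch under assumed settings (the image folder, board size and square size are made up for illustration), not the original publication's implementation.

```python
# Minimal sketch of plane-based (chessboard) calibration using OpenCV's standard API.
# Folder name, board size and square size are assumptions for illustration.
import glob
import numpy as np
import cv2

board_size = (9, 6)      # inner corners per chessboard row and column (assumed)
square_size = 0.025      # chessboard square size in metres (assumed)

# 3D coordinates of the chessboard corners in the plane's own frame (Z = 0)
obj = np.zeros((board_size[0] * board_size[1], 3), np.float32)
obj[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_size

obj_points, img_points = [], []
image_size = None
for path in glob.glob("calib_images/*.png"):   # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    found, corners = cv2.findChessboardCorners(gray, board_size)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(obj)
    img_points.append(corners)
    image_size = gray.shape[::-1]

# Each view of the plane constrains the intrinsic parameters; several views in
# general position suffice to estimate the camera matrix K, the distortion
# coefficients, and the per-view poses.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)
print("Intrinsic matrix K:\n", K)
```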
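
Regarding projective reconstruction: the basic idea behind the perspective extension of factorization can be summarized as follows, in illustrative notation (the exact method is described in the corresponding publications). With projective depths included, all image measurements can be gathered into a single low-rank matrix, which is then factorized into cameras and points.

```latex
% Perspective factorization, sketched in illustrative notation.
% x_{ij}: homogeneous image point j in view i; \lambda_{ij}: projective depth;
% P_i: 3x4 projection matrix of view i; X_j: homogeneous 3D point j.
\lambda_{ij}\, x_{ij} = P_i X_j
\qquad\Longrightarrow\qquad
W =
\begin{pmatrix}
\lambda_{11} x_{11} & \cdots & \lambda_{1n} x_{1n} \\
\vdots              &        & \vdots              \\
\lambda_{m1} x_{m1} & \cdots & \lambda_{mn} x_{mn}
\end{pmatrix}
=
\begin{pmatrix} P_1 \\ \vdots \\ P_m \end{pmatrix}
\begin{pmatrix} X_1 & \cdots & X_n \end{pmatrix}
```

Once the projective depths are estimated, W has rank at most 4 and can be factorized (e.g. via SVD) into projection matrices and 3D points, in analogy to the affine Tomasi-Kanade case.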

Triangulation of points
  • The Geometric Error for Homographies Ondra Chum, Tomáš Pajdla, Peter Sturm. CVIU – Computer Vision and Image Understanding, 97(1), 86-102, 2005.
  • On Geometric Error for Homographies Ondra Chum, Tomáš Pajdla, Peter Sturm. Technical Report CTU-CMP-20, Czech Technical University, Prague, 2003.
  • Triangulation Richard Hartley, Peter Sturm. CVIU – Computer Vision and Image Understanding, 68(2), 146-157, 1997.
  • Triangulation Richard Hartley, Peter Sturm. CAIP – 6th International Conference on Computer Analysis of Images and Patterns, Prague, Czech Republic, 190-197, 1995.
  • Triangulation Richard Hartley, Peter Sturm. ARPA Image Understanding Workshop, Monterey, California, USA, 957-966, 1994.
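
For context, here is a basic linear (DLT) two-view triangulation in Python with NumPy. This is not the globally optimal method of the papers above, which minimizes the reprojection error exactly; it is only the standard linear baseline, with hypothetical camera matrices chosen for illustration.

```python
# Basic linear (DLT) two-view triangulation sketch with NumPy.
# NOT the globally optimal method of the publications above; shown only to
# illustrate the triangulation problem itself.
import numpy as np

def triangulate_linear(P1, P2, x1, x2):
    """Triangulate one 3D point from two 3x4 projection matrices P1, P2
    and matched pixel coordinates x1 = (u1, v1), x2 = (u2, v2)."""
    u1, v1 = x1
    u2, v2 = x2
    # Each view contributes two linear equations in the homogeneous point X,
    # obtained from x ~ P X by eliminating the unknown scale factor.
    A = np.stack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # Solve A X = 0 in the least-squares sense via SVD; X is the right
    # singular vector associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # de-homogenize

# Example with hypothetical cameras: canonical camera and one translated along x.
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 4.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate_linear(P1, P2, x1, x2))   # recovers approx. (0.2, -0.1, 4.0)
```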

In Memoriam Roger Mohr
  • In Memoriam Roger Mohr Karl Tombre, Long Quan, Radu Horaud, Patrick Gros, Cordelia Schmid, Peter Sturm. 1024 – Bulletin de la Société Informatique de France, 11, 107-114, 2017.