Three-Dimensional Sensors

Depth Cameras and Associated Computer Vision Methods

Radu Horaud (INRIA), Miles Hansard (QMUL), and Georgios Evangelidis (DAQRI)

The emergence of three-dimensional sensors, e.g., the Microsoft Kinect v1 and v2 and the Asus Xtion Pro Live (structured-light sensors), the Mesa Imaging SR4000 (a time-of-flight camera), or the Velodyne HDL-64 laser range finder, to cite just a few, has revolutionized the way many image-processing and computer-vision problems are addressed, which traditionally relied on either stereoscopic setups of color (2D) cameras or complex scanners. These new sensors capture a depth image (a depth value at each pixel) at 10-30 frames per second. This opens a whole new range of real-time applications in a variety of domains, such as robot perception, human-robot interaction, multimodal interfaces, computer graphics, computer entertainment, and augmented reality.
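To make the notion of a depth image concrete, here is a minimal sketch of how a per-pixel depth map is back-projected to a 3D point cloud with the standard pinhole model. The intrinsic parameters below (fx, fy, cx, cy) are illustrative Kinect-like values, not calibration results from any particular sensor.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in metres) to an N x 3 point cloud
    using the pinhole model: X = (u - cx) Z / fx, Y = (v - cy) Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Toy example: a flat wall 2 m away, with assumed Kinect-like intrinsics.
depth = np.full((480, 640), 2.0)
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```

At 10-30 frames per second, this back-projection must run on every frame, which is why depth-image and point-cloud processing (Lecture #5) emphasizes efficient, vectorized operations.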

The objective of this lecture series is to familiarize Master's and PhD students, researchers, and engineers with these novel technologies, as well as with the methods and algorithms needed to process depth data. Additionally, the lectures address how to combine 3D sensors with high-definition color cameras in order to obtain very rich (3D + RGB) representations.
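Obtaining a combined 3D + RGB representation amounts to projecting each 3D point into the color camera and sampling the image there. The sketch below assumes known color-camera intrinsics K and depth-to-color extrinsics (R, t); the function name and nearest-pixel sampling are illustrative simplifications (real pipelines handle occlusion and interpolation).

```python
import numpy as np

def colorize_points(points, rgb, K, R, t):
    """Attach an RGB value to each 3D point by projecting it into a
    colour camera with intrinsics K and depth-to-colour extrinsics (R, t)."""
    p_cam = points @ R.T + t                  # points in the colour-camera frame
    uvw = p_cam @ K.T                         # homogeneous image coordinates
    uv = uvw[:, :2] / uvw[:, 2:3]             # perspective division
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, rgb.shape[1] - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, rgb.shape[0] - 1)
    return np.hstack([points, rgb[v, u]])     # N x 6 array: X, Y, Z, R, G, B
```

Estimating the extrinsics (R, t) between the depth and color cameras is exactly the cross-calibration problem addressed in the publications listed below.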

Our book is available for download, or it can be purchased from Springer.

Content Material
Lecture #1: Introduction to 3D sensors: history, physical and geometric principles. Horaud_3DS_1.pdf
Lecture #2: Projected-light cameras. Horaud_3DS_2.pdf
Lecture #3: Time-of-flight cameras I (continuous wave). Horaud_3DS_3.pdf
Lecture #4: Time-of-flight cameras II (pulsed light). Horaud_3DS_4.pdf
Lecture #5: Depth-image processing and point-cloud processing. Horaud_3DS_5.pdf
Lecture #6: Point-cloud registration methods. Horaud_3DS_6.pdf
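As a taste of the registration problem covered in Lecture #6, here is a minimal sketch of classical point-to-point ICP: alternate nearest-neighbour matching with a closed-form rigid alignment (Kabsch/SVD). This is the textbook baseline, not the EM-based joint registration methods from the publications below, and it assumes roughly pre-aligned clouds.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=20):
    """Minimal point-to-point ICP. Returns (R, t) mapping source onto target."""
    src = source.copy()
    tree = cKDTree(target)                    # for fast nearest-neighbour queries
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        _, idx = tree.query(src)              # closest target point per source point
        matched = target[idx]
        mu_s, mu_m = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_m) # 3 x 3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # reflection-safe rotation
        t = mu_m - R @ mu_s
        src = src @ R.T + t                   # apply the incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

ICP converges only to a local minimum, which is why the registration literature (including the batch and incremental EM methods listed below) focuses on robustness to outliers, partial overlap, and poor initialization.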

Online Resources

Publications

G. Evangelidis and R. Horaud. Joint Alignment of Multiple Point Sets with Batch and Incremental Expectation-Maximization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017. https://arxiv.org/abs/1609.01466

R. Horaud, G. Evangelidis, M. Hansard, and C. Ménier. An Overview of Range Scanners and Depth Cameras Based on Time-of-Flight Technologies. Machine Vision and Applications, 27(7), 1005-1020, 2016.

G. D. Evangelidis, M. Hansard, and R. Horaud. Fusion of Range and Stereo Data for High-Resolution Scene-Modeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(11), 2178-2192, 2015.

G. Evangelidis, D. Kounades-Bastian, R. Horaud, and E. Psarakis. A Generative Model for the Joint Registration of Multiple Point Sets. European Conference on Computer Vision (ECCV'14), Zurich, Switzerland, September 2014.

M. Hansard, G. Evangelidis, and R. Horaud. Cross-Calibration of Time-of-Flight and Colour Cameras. Computer Vision and Image Understanding, special issue on Camera Networks, 134, 105-115, 2015.

M. Hansard, R. Horaud, M. Amat, and G. Evangelidis. Automatic Detection of Calibration Grids in Time-of-Flight Images. Computer Vision and Image Understanding, 121, 108-118, 2014.

M. Hansard, S. Lee, O. Choi, and R. Horaud. Time of Flight Cameras: Principles, Methods, and Applications. Springer Briefs in Computer Science, October 2012.