As with humans and most animals, vision is a crucial sense for a robot interacting with its environment. Vision for robotics has given rise to a tremendous amount of research and successful applications since the creation of the fields of robotics and computer vision several decades ago.
The aim of this tutorial is to provide a comprehensive state-of-the-art overview of the basic concepts, methodologies, and applications. It will be devoted to the modeling of visual sensors and the underlying geometry, object detection and recognition, visual tracking and 3D localization, and visual servoing, closing the loop from perception to action.
Note that visual SLAM, an important component of vision for robotics (typically for exploration and navigation), will not be addressed in this tutorial but in the afternoon tutorial devoted to SLAM. The interested audience is invited to follow both tutorials to get a global overview of robot vision.
Lecture 3 (Eric Marchand): Visual Tracking
Visual tracking is a key issue in the development of vision-based robotic tasks. Once detected and recognized, objects have to be tracked and localized in the image stream.
Beginning with the tracking of elementary geometric features (points, lines, …), we will consider the case where the models of the tracked objects are fully known (model-based tracking), along with the case where only image intensity and basic geometric constraints are available (template tracking, or KLT-like methods).
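As an illustration of the template-tracking idea, a minimal single-patch, translation-only KLT-style tracker can be sketched in NumPy. This is a simplified sketch under assumptions of our own (a synthetic Gaussian blob, a pure-translation warp, a single pyramid level); the function names and parameters are illustrative and not part of the tutorial material:

```python
import numpy as np

def bilinear(img, xs, ys):
    """Sample img at real-valued coordinates (xs, ys) with bilinear interpolation."""
    x0 = np.floor(xs).astype(int)
    y0 = np.floor(ys).astype(int)
    wx, wy = xs - x0, ys - y0
    return (img[y0,     x0]     * (1 - wx) * (1 - wy)
          + img[y0,     x0 + 1] * wx       * (1 - wy)
          + img[y0 + 1, x0]     * (1 - wx) * wy
          + img[y0 + 1, x0 + 1] * wx       * wy)

def klt_translation(I0, I1, cx, cy, half=8, iters=20):
    """Estimate the 2D translation of the patch centered at (cx, cy)
    between images I0 and I1 by Gauss-Newton minimization of the
    intensity difference (the core of a KLT-like tracker)."""
    ys, xs = np.mgrid[cy - half:cy + half + 1,
                      cx - half:cx + half + 1].astype(float)
    T = bilinear(I0, xs, ys)          # reference template
    d = np.zeros(2)                   # current displacement estimate
    for _ in range(iters):
        W  = bilinear(I1, xs + d[0], ys + d[1])          # warped current patch
        gx = (bilinear(I1, xs + d[0] + 0.5, ys + d[1]) -
              bilinear(I1, xs + d[0] - 0.5, ys + d[1]))  # x image gradient
        gy = (bilinear(I1, xs + d[0], ys + d[1] + 0.5) -
              bilinear(I1, xs + d[0], ys + d[1] - 0.5))  # y image gradient
        J = np.stack([gx.ravel(), gy.ravel()], axis=1)   # Jacobian of the warp
        # normal equations of the linearized sum-of-squared-differences problem
        d += np.linalg.solve(J.T @ J, J.T @ (T - W).ravel())
    return d

# synthetic example: a Gaussian blob that moves by (+1.5, -0.5) pixels
yy, xx = np.mgrid[0:64, 0:64].astype(float)
blob = lambda cx, cy: np.exp(-((xx - cx)**2 + (yy - cy)**2) / (2 * 6.0**2))
I0, I1 = blob(30.0, 30.0), blob(31.5, 29.5)
d = klt_translation(I0, I1, 30, 30)   # d should be close to (1.5, -0.5)
```

Real KLT trackers extend this core with image pyramids for large motions, affine patch warps, and a feature-quality test on the eigenvalues of the matrix J.T @ J.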
Tracking being a spatio-temporal process, prediction and filtering (e.g., Kalman or particle filters) are useful tools for improving the results and robustness of visual tracking.
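To give a flavor of such filtering, a constant-velocity Kalman filter for a tracked image point can be sketched in NumPy. The motion model, noise levels, and all names here are illustrative assumptions, not part of the tutorial material:

```python
import numpy as np

def kalman_cv(dt=1.0, q=1e-3, r=1.0):
    """Constant-velocity Kalman model for a 2D point: state [x, y, vx, vy]."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], float)   # state transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)    # only the position is measured
    return F, H, q * np.eye(4), r * np.eye(2)

def kf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle; returns the filtered state and covariance."""
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    S = H @ P @ H.T + R                           # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ (z - H @ x)                       # correct with the measurement
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# track a feature moving at (1.0, 0.5) px/frame from its measured positions
F, H, Q, R = kalman_cv()
x, P = np.zeros(4), 10.0 * np.eye(4)
for k in range(1, 40):
    z = np.array([1.0 * k, 0.5 * k])   # measured image position at frame k
    x, P = kf_step(x, P, z, F, H, Q, R)
# x[2:] now estimates the velocity, close to (1.0, 0.5)
```

The predicted position F @ x can initialize the tracker's search in the next frame, and the innovation z - H @ x gives a simple test for rejecting outlier measurements.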
The results of the tracking algorithms may then be used within a visual servoing control scheme.