
ICRA 2018 Tutorial on Vision-based Robot Control

Monday, May 21, afternoon (13:30 – 17:00)

Brisbane Convention & Exhibition Centre, Room M3

As for humans and most animals, vision is a crucial sense for a robot interacting with its environment. Vision-based robot motion control, also called visual servoing, is a general approach for closing the perception-action loop. It has given rise to a tremendous amount of research and many successful applications since the fields of robotics and computer vision emerged several decades ago. The aim of this tutorial is to provide a comprehensive overview of the basic concepts, methodologies, and applications, and to present the spectacular results recently obtained with deep learning.

Four lectures will be presented by renowned experts, with time set aside for questions and discussion; the last lecture is devoted to application and implementation issues.

Speakers

François Chaumette
Inria
Robert Mahony
ANU
Pieter Abbeel
UC Berkeley
Fabien Spindler
Inria

Program – Monday, May 21, afternoon – Room M3

13:30 – 13:35 Introduction
13:35 – 14:15 Geometric and photometric vision-based robot control: modeling approach
François Chaumette, Inria, Rennes
14:15 – 15:00 Vision-based control of dynamic systems: application in aerial robotics
Rob Mahony, ANU, Canberra
15:00 – 15:30 Afternoon Tea
15:30 – 16:15 Vision-based robot control with deep learning
Pieter Abbeel, UC Berkeley
16:15 – 17:00 Vision-based robot control with ViSP
Fabien Spindler, Inria, Rennes

Lectures

    • Geometric and photometric vision-based robot control: modeling approach by François Chaumette, Inria, Rennes

Vision-based robot control is a general methodology for controlling the motion of a robot in closed loop with respect to visual data. The lecture will describe the modeling steps necessary to design kinematic control schemes, along with a panel of applications showing the large class of robotic tasks that can be accomplished with this methodology. The first part will cover the traditional approach based on geometric visual features, such as image points, image moments, or the camera-object pose. The more recent dense approach, which uses the image content directly without any image tracking or matching process, will also be considered; it provides a link to modern CNN-based methods that use the same inputs.
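For reference, here is a minimal sketch of the classical formulation in the standard notation of the visual-servoing literature (generic symbols, not necessarily those used in the lecture). With feature vector s, desired value s*, and error e = s - s*, the interaction matrix L_e relates the error dynamics to the camera velocity twist v_c, and imposing an exponential decrease of the error yields the classical control law:

    \mathbf{e} = \mathbf{s} - \mathbf{s}^*, \qquad
    \dot{\mathbf{e}} = \mathbf{L}_{\mathbf{e}}\,\mathbf{v}_c, \qquad
    \dot{\mathbf{e}} = -\lambda\,\mathbf{e}
    \;\Rightarrow\;
    \mathbf{v}_c = -\lambda\,\widehat{\mathbf{L}_{\mathbf{e}}}^{+}\,\mathbf{e}

where lambda > 0 is a gain and the hat and + denote an approximation of the interaction matrix and its Moore-Penrose pseudo-inverse. The choice of features s (geometric primitives, image moments, pose, or raw intensities in the dense approach) is precisely what the modeling step is about.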

    • Vision-based control of dynamic systems: application in aerial robotics by Rob Mahony, ANU, Canberra

Classical image-based visual servo (IBVS) control was formulated and developed for rigid robotic manipulators and uses only the forward kinematics. Extending it to dynamic systems proved to be a challenging step that was only effectively resolved 15 years after the first IBVS controllers were proposed. Extending it further to underactuated dynamic systems, such as aerial robots, is even more challenging. In this talk I will cover the now established approach to visual servo control for Euler-Lagrange and Port-Hamiltonian dynamic systems and explain the role of the interaction matrix as an energy transformer. I will then discuss some of the issues associated with underactuated systems in the context of servo control of aerial vehicles and present solutions to specific problems.
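As a rough sketch of the energy viewpoint (a generic Euler-Lagrange reading, not necessarily the exact formulation used in the lecture): consider fully actuated dynamics and an image error e driven by the velocity v through the interaction matrix L,

    M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + g(q) = \tau, \qquad
    \dot{\mathbf{e}} = \mathbf{L}\,\mathbf{v}

Taking the image error energy S = (1/2) e^T e as a storage function gives dS/dt = e^T L v, so the image error feeds back into the mechanics as the generalized force L^T e through the same power pairing; in this sense the interaction matrix acts as a power-preserving transformer between the image-space and actuator-space energy domains.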

    • Vision-based robot control with deep learning by Pieter Abbeel, UC Berkeley

Reinforcement learning and imitation learning have seen success in many domains, including autonomous helicopter flight, Atari, simulated locomotion, Go, and robotic manipulation. However, the sample complexity of these methods remains very high. With a focus on vision-based control, in this talk I will present several ideas for reducing sample complexity: (i) Hindsight Experience Replay, which infuses learning signal into (traditionally) zero-reward runs and is compatible with existing off-policy algorithms; (ii) some recent advances in model-based reinforcement learning, which achieve significant sample-complexity gains over the more widely studied model-free methods; (iii) meta-reinforcement learning, which can significantly reduce sample complexity by building on skills acquired in the past; (iv) domain randomization, a simple idea that can often enable training fully in simulation while still recovering policies that perform well in the real world.
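To make idea (i) concrete, here is a minimal, hypothetical C++ sketch of the Hindsight Experience Replay relabeling step (types and names are illustrative only, not from any particular codebase; states and goals are 1-D for brevity). Each transition of a failed episode is stored a second time with the goal replaced by the state actually achieved, turning a zero-reward trajectory into usable learning signal:

    #include <cmath>
    #include <vector>

    // Illustrative 1-D transition; real implementations use vectors/tensors.
    struct Transition {
        double state, action, next_state, goal, reward;
    };

    // Sparse reward: success only when the achieved state matches the goal.
    double reward_fn(double achieved, double goal) {
        return std::fabs(achieved - goal) < 1e-3 ? 0.0 : -1.0;
    }

    // HER "final" strategy: store each transition twice, once with the
    // original goal and once relabeled with the goal actually achieved
    // at the end of the episode.
    void her_relabel(const std::vector<Transition>& episode,
                     std::vector<Transition>& replay_buffer) {
        const double achieved_goal = episode.back().next_state;
        for (const Transition& t : episode) {
            replay_buffer.push_back(t);          // original, likely reward -1
            Transition relabeled = t;
            relabeled.goal = achieved_goal;
            relabeled.reward = reward_fn(t.next_state, achieved_goal);
            replay_buffer.push_back(relabeled);  // hindsight copy
        }
    }

The relabeled copies are indistinguishable from ordinary transitions, which is why the trick composes with any existing off-policy learner without modification.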

    • Vision-based robot control with ViSP by Fabien Spindler, Inria, Rennes

ViSP, developed at Inria since 2004, is an open-source, cross-platform software library dedicated to visual-tracking and visual-servoing applications. After a brief overview highlighting recent changes in ViSP, the lecture will first focus on real-time vision-based detection and tracking algorithms that can be used to control a robot. It will then focus on the visual-servoing controller and show how to use the outputs of the detection and tracking algorithms to control a robot with respect to a static or moving target. The lecture will be illustrated with use cases, sample code, and live demonstrations of markerless model-based tracking using an RGB-D camera, visual servoing in simulation, and visual servoing on an mBot Ranger educational robot kit equipped with a Raspberry Pi.
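For a flavor of the API, here is a minimal visual-servoing sketch built on classes that appear throughout the ViSP tutorials (vpServo, vpFeaturePoint); the camera setup and the tracking loop that would update the current feature are omitted, and a real task would typically stack at least four point features:

    #include <visp3/visual_features/vpFeaturePoint.h>
    #include <visp3/vs/vpServo.h>

    int main() {
        // Current and desired point features: (x, y) in normalized image
        // coordinates, Z the depth used in the interaction matrix.
        vpFeaturePoint s, s_star;
        s.buildFrom(0.1, 0.2, 1.0);       // would come from a tracker
        s_star.buildFrom(0.0, 0.0, 1.0);  // desired position in the image

        vpServo task;
        task.setServo(vpServo::EYEINHAND_CAMERA);        // camera on the arm
        task.setInteractionMatrixType(vpServo::CURRENT); // L evaluated at s
        task.setLambda(0.5);                             // control gain
        task.addFeature(s, s_star);

        // In a real application this runs in a loop: update s from the
        // tracker, compute v = -lambda * L^+ * (s - s*), send v to the robot.
        vpColVector v = task.computeControlLaw();

        task.kill();  // explicit task cleanup required by ViSP 3.x
        return 0;
    }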

 
