Photometric visual servoing

Contact: Christophe Collewet, Eric Marchand
Last modification: September 2008

Overview: Visual servoing from luminance information

This demo shows a new way to achieve robotic tasks by visual servoing. Instead of using geometric features (points, straight lines, pose, homography, etc.), as is usually done, we directly use the luminance of all pixels in the image. Experimental results validate the proposed approach and show its robustness to approximate depths, non-Lambertian objects, and partial occlusions.
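
For illustration, a minimal sketch of this kind of control law is given below (Python/NumPy); it is not the actual implementation used in the demo. It assumes grayscale images stored as float arrays, image gradients expressed in normalized coordinates, a calibrated camera, and a single coarse depth estimate Z_hat; the function names and the gain value are ours. The error is simply the vector of pixel intensity differences e = I - I*, and the camera velocity follows v = -lambda L+ e, each pixel contributing one row of the luminance interaction matrix built from the standard 2x6 point interaction matrix.

    import numpy as np

    def point_interaction_matrix(x, y, Z):
        # Standard 2x6 interaction matrix of a normalized image point (x, y)
        # at depth Z; only the first three (translational) columns involve Z.
        return np.array([
            [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
            [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
        ])

    def photometric_control(I, I_star, gx, gy, xs, ys, Z_hat, lam=0.5):
        # Photometric error: one entry per pixel, no feature extraction at all.
        e = (I - I_star).ravel()
        L = np.empty((e.size, 6))
        pixels = zip(xs.ravel(), ys.ravel(), gx.ravel(), gy.ravel())
        for k, (x, y, dIdx, dIdy) in enumerate(pixels):
            Lp = point_interaction_matrix(x, y, Z_hat)
            # Row of the luminance interaction matrix: L_I = -(dI/dx, dI/dy) Lp
            L[k] = -(dIdx * Lp[0] + dIdy * Lp[1])
        # v = -lam * pinv(L) @ e, computed as a least-squares solution.
        v, *_ = np.linalg.lstsq(L, -lam * e, rcond=None)
        return v  # camera velocity screw (vx, vy, vz, wx, wy, wz)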

References

C. Collewet, E. Marchand, F. Chaumette. Visual servoing set free from image processing. IEEE Int. Conf. on Robotics and Automation, ICRA'08, Pasadena, CA, May 2008.

C. Collewet, E. Marchand. Modeling complex luminance variations for target tracking. IEEE Int. Conf. on Computer Vision and Pattern Recognition, CVPR'08, Anchorage, Alaska, June 2008.

Results

For each experiment, the images show the first and final images along with the error between the desired image and the initial or final one. The video shows the current image and the difference between the current and desired images (a uniformly gray image means no error).
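
For illustration, one plausible way to build such a difference image is sketched below; the exact scaling used in the videos is not documented here, so this mapping is only an assumption.

    import numpy as np

    def error_image(I, I_star):
        # Signed difference mapped to 8 bits so that mid-gray (128) means
        # zero error; halving keeps the full signed range inside [0, 255].
        d = I.astype(np.int16) - I_star.astype(np.int16)
        return np.clip(d // 2 + 128, 0, 255).astype(np.uint8)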

Large camera motion

A non-Lambertian object was used: a photograph covered by glass. The initial pose involves a large motion to reach the desired position: the error between the initial and desired poses is (20cm, 10cm, 5cm, 10deg, 11deg, 15deg). As can be seen, despite the non-Lambertian object, the control law converges without any problem. The final positioning error is very low: dr = (-0.1mm, -0.1mm, -0.1mm, -0.01deg, -0.01deg, -0.01deg).

see the video

Initial and final images, and the corresponding error images.

Very large camera motion with occlusions

In this experiment, the rotation around the Z axis is increased to 30deg (the error between the initial and desired poses is (20cm, 10cm, 5cm, 10deg, 11deg, 30deg)), and we add a new object that is not present in the desired image; it can be seen "as a ghost" in the final error image.

see the video

Initial and final images, and the corresponding error images.

Robustness to unknown scene depth

The goal of this experiment is to show the robustness of the control law with respect to the depths. For this purpose, a non-planar scene has been used, as shown in the figure below, so that large errors on the depths are introduced (the height of the castle tower is around 30cm). The initial and desired poses are unchanged. Here again, the control law converges and the positioning error remains low: the final positioning error is dr = (0.2mm, -0.2mm, 0.1mm, -0.01deg, -0.02deg, 0.01deg).
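
One way to see why coarse depths are tolerated: in the standard point interaction matrix (used in the sketch of the overview), the depth only enters the translational columns through 1/Z, so replacing the true depths by a constant estimate leaves the rotational part untouched. A small check with arbitrary values:

    # Depth only appears in the translational columns (through 1/Z);
    # the rotational columns are depth-free.
    L_true = point_interaction_matrix(0.1, -0.05, Z=0.8)  # hypothetical true depth
    L_hat = point_interaction_matrix(0.1, -0.05, Z=1.0)   # coarse constant estimate
    assert np.allclose(L_true[:, 3:], L_hat[:, 3:])       # rotational part unchanged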

External view of the scene.

see the video

Initial and final images, and the corresponding error images.

Robustness to a non-textured and non-Lambertian scene

see the video

Initial and final images, and the corresponding error images.

Tracking results

We generalize the Lambertian model underlying the optical flow constraint equation using a Phong reflection model. After specializing it to a particular light source position (coincident with the camera position), we apply this result to target tracking by visual servoing (see the CVPR 2008 paper).
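
Schematically, and in our notation rather than the paper's: the classical optical flow constraint equation states that the total time derivative of the luminance vanishes, which holds for a Lambertian surface under constant lighting. With a Phong model, the specular lobe makes this derivative non-zero; the exact right-hand side for a light source coincident with the camera is derived in the CVPR 2008 paper.

    % Classical optical flow constraint equation (Lambertian case):
    %   the total time derivative of the luminance vanishes.
    \nabla I^\top \dot{\mathbf{x}} + \frac{\partial I}{\partial t} = 0

    % Phong reflection model: ambient, diffuse and specular contributions.
    I = K_a + K_d \cos\theta + K_s \cos^n\!\alpha

    % With the specular term (and assuming the ambient and diffuse terms
    % stay constant along the motion), the constraint becomes:
    \nabla I^\top \dot{\mathbf{x}} + \frac{\partial I}{\partial t}
      = \frac{d}{dt}\left( K_s \cos^n\!\alpha \right)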
