New paper to be presented at ICRA’23

 

A new paper has been accepted at the IEEE Int. Conf. on Robotics and Automation, thanks to Samuel Felton.

This paper is related to deep learning, metric learning, and visual servoing.

  1. S. Felton, E. Fromont, E. Marchand. Deep metric learning for visual servoing: when pose and image meet in latent space. In IEEE Int. Conf. on Robotics and Automation, ICRA’23, London, UK, May 2023.

We propose a new visual servoing method that controls a robot’s motion in a latent space. We aim to extract the best properties of two previously proposed servoing methods: we seek to obtain the accuracy of photometric methods such as Direct Visual Servoing (DVS), as well as the behavior and convergence of pose-based visual servoing (PBVS). Photometric methods suffer from a limited convergence area due to a highly non-linear cost function, while PBVS requires estimating the pose of the camera, which may introduce noise and incurs a loss of accuracy.
Our approach relies on shaping (with metric learning) a latent space in which the representations of camera poses and the embeddings of their respective images are tied together. By leveraging the multimodal aspect of this shared space, our control law minimizes the difference between latent image representations thanks to information obtained from a set of pose embeddings. Experiments in simulation and on a robot validate the strength of our approach, showing that the sought-after benefits are effectively obtained.
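
To give a rough idea of what such a scheme can look like, here is a minimal sketch in PyTorch. It assumes a small CNN image encoder, an MLP pose encoder, a contrastive-style metric-learning loss that ties each image embedding to the embedding of the pose it was taken from, and a latent-space control law of the form v = -λ L⁺ (z(I) - z(I*)). All of these choices (the architectures, the loss, and the interaction matrix L_z) are illustrative assumptions, not the formulation from the paper.

```python
# Illustrative sketch only: the encoders, loss, and control law below are
# assumptions for exposition, not the paper's exact architecture or method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageEncoder(nn.Module):
    """Maps a (grayscale) image I to a unit-norm latent vector z(I)."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, latent_dim),
        )
    def forward(self, img):
        return F.normalize(self.net(img), dim=-1)

class PoseEncoder(nn.Module):
    """Maps a 6-DoF camera pose r to the same latent space as the images."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )
    def forward(self, pose):
        return F.normalize(self.net(pose), dim=-1)

def metric_learning_loss(z_img, z_pose, margin=0.2):
    """Hinge-based contrastive loss: pull each image embedding towards the
    embedding of the pose it was acquired from, push mismatched pairs apart."""
    sim = z_img @ z_pose.t()                              # (B, B) cosine similarities
    pos = sim.diag()                                      # matched (image, pose) pairs
    neg = sim - torch.eye(len(sim), device=sim.device) * 1e9  # mask the diagonal
    return F.relu(margin - pos[:, None] + neg).mean()

def latent_control_step(z_cur, z_des, L_z, gain=0.5):
    """One servoing iteration: drive the latent error e = z(I) - z(I*) to zero
    with v = -gain * pinv(L_z) @ e, where L_z is a (hypothetical) interaction
    matrix relating latent-feature variations to the camera velocity twist."""
    e = (z_cur - z_des).squeeze(0)
    v = -gain * torch.linalg.pinv(L_z) @ e                # 6-dim velocity command
    return v
```

In this sketch, the two encoders would be trained jointly with the metric-learning loss on (image, pose) pairs, so that the shared latent space carries pose information; at servo time only images are needed, and the velocity is computed from the latent error between the current and desired image embeddings.
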

This paper is a collaboration with the Inria Lacodam team.
