Internship “Active sensing control for robotic systems”


Teams: Rainbow IRISA/Inria Rennes, Centro di Ricerca “E. Piaggio”, and Maynooth University

Scientific contact: prg@irisa.fr

Duration: 6 months 

Project site: team.inria.fr/rainbow

How to apply: Interested candidates are requested to send an email to prg@irisa.fr and claudio.pacchierotti@irisa.fr, including their CV and latest Transcript of Records (with ranking, if available).


Environment

This project is part of an international collaboration between the CNRS/IRISA research center in Rennes, France (with Marco Aggravi, Claudio Pacchierotti and Paolo Robuffo Giordano), the Centro di Ricerca “E. Piaggio” at the University of Pisa in Pisa, Italy (with Paolo Salaris), and Maynooth University in Maynooth, Ireland (with Marco Cognetti).

The student can work in one of these research centers, depending on the chosen topic and the advisors’ availability during the selected period.

Fig. 1: The UAV should move from the initial point (bottom left) to the final one (top center). In the left figure, the UAV chooses a path with no landmarks; it is therefore unable to localize itself, and the covariance of the employed observer grows over time. In the right figure, the UAV follows a path that maximizes the information coming from its on-board sensor (here, a down-looking camera), minimizing the estimation uncertainty of the employed observer. On the right of each figure, the UAV’s camera view is shown.

 

Research overview

Robotics has been defined as the intelligent connection between perception and action: a robot needs to move to achieve a task, and its choice of the next action depends on its internal knowledge of the “world” (including its internal state, model parameters, disturbances, a map of the surroundings, etc.). Any robot therefore needs to reason about the expected outcome of each candidate action (e.g., one planned for achieving a desired task) in terms of the information acquired about the “world” via the on-board sensors, in order to select the best one [1]. Indeed, on-board sensors can typically provide only partial information about the world, so some level of reconstruction/inference/estimation is needed online to recover, as well as possible, any information not directly available from the raw sensor readings.

One of the aspects that makes the action-perception loop scientifically very interesting and challenging is the tight coupling between action and perception performance (in particular for nonlinear systems): to successfully realize a task, the robot requires accurate knowledge of the “world” model instrumental to the task itself. However, the chosen actions also have a strong influence on the accuracy of the estimated “world” model, with some actions being more informative than others (see Fig. 1 for an illustrative example).

This also seems to hold for humans: by incorporating state-dependent sensory feedback, the authors of [2,3] show that the optimal solution includes active sensing and is no longer a pure feedback process (mainly devoted to accomplishing a given task) but contains a significant feedforward component (mainly devoted to reducing the degradation of the collected information caused by noise).

Active sensing control, or active perception, strategies [1,4,5] have been developed over the last decades of robotics research as optimization problems aimed at determining the robot actions that maximize the information collected via the on-board sensors along the planned trajectory/path. This information can be measured by suitable metrics (often coming from observability/reconstructability theory) that are, in turn, strictly related to the reconstruction/inference/estimation performance of the employed observer.
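As a concrete illustration of such a metric, the following Python sketch (a simplification introduced here for the announcement, not code from the cited works) scores candidate paths by the smallest eigenvalue of the Fisher information accumulated from range-only measurements to a single known landmark; motion/process noise is neglected, and all names and numbers are assumptions. An active sensing controller would steer the robot along the path with the higher score.

```python
# Minimal sketch: E-optimality score of a path from range-only measurements.
import numpy as np

def range_jacobian(p, landmark):
    # Jacobian of h(p) = ||p - landmark|| w.r.t. p: a 1x2 unit row vector.
    d = p - landmark
    return (d / np.linalg.norm(d)).reshape(1, 2)

def information_score(path, landmark, meas_var=0.01):
    # Accumulate H^T R^-1 H along the path; return the smallest eigenvalue.
    info = np.zeros((2, 2))
    for p in path:
        H = range_jacobian(p, landmark)
        info += H.T @ H / meas_var
    return np.linalg.eigvalsh(info)[0]  # larger = more informative path

# Compare two candidate paths from (0, 0) to (4, 0), landmark at (2, 2).
t = np.linspace(0.0, 1.0, 50)
straight = np.stack([4.0 * t, np.zeros_like(t)], axis=1)
curved = np.stack([4.0 * t, 1.5 * np.sin(np.pi * t)], axis=1)
landmark = np.array([2.0, 2.0])
print(information_score(straight, landmark), information_score(curved, landmark))
```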

 

Specific research directions

Within the framework described above, several interesting and challenging research directions can be explored:

1)    Online active sensing control in dynamic and unstructured environments: the aim of this project is to develop active sensing control strategies for a ground mobile robot in a dynamic environment. The ground vehicle has to localize itself w.r.t. moving landmarks, here represented by a group of unmanned aerial vehicles (UAVs). To allow absolute localization of the ground vehicle, one of the UAVs has global localization capabilities (e.g., it is equipped with a GPS). The ground robot may be autonomous or guided by a human, with the aim of accomplishing a high-level task that heavily depends on the accuracy and precision with which the ground robot is localized (e.g., activating an electric switch in a plant). The robot of course needs to cooperate with the UAVs in order to keep them inside its on-board sensors’ range, to collect as much information as possible about its location w.r.t. the UAVs, and possibly to communicate these needs to the operator.

2)    Online active sensing control for fast environment reconstruction: the aim of this project is to develop a methodology for fast, concurrent active exploration and environment reconstruction. The algorithm has to automatically select the best landmarks to use for localization purposes, determine the trajectories that minimize the uncertainty about both the state of the robot and the positions of the landmarks, and define new exploration directions (to be followed autonomously or offered as suggestions to the human operator).

3)    Online task-aware active sensing control: the aim of this project is to orient the maximization of the collected information towards the task to be achieved by the robot (or by the human operator guiding it). Indeed, if the robot has a task to perform (e.g., grasping an object), it is more interested in minimizing the uncertainty at the task level (e.g., accurately determining the position and orientation of the hand w.r.t. the object) than at the state-space level (which may also include the absolute position of the robot w.r.t. the environment, the configuration of the whole arm, etc.); the first sketch after this list illustrates how state-space uncertainty can be projected to the task space. The demonstrator of the developed methodology will be an aerial vehicle carrying a small manipulator with a terminal gripper able to hook onto pivot points; the vehicle needs to perform a maneuver from an initial hooked configuration to a final one while passing through a free-flight phase. Of course, to maximize the probability that the task of hooking an anchor point is fulfilled, the robot needs to minimize the uncertainty about the task itself.

4)    Shared-control active perception (automatic authority switching between humans and robots): the aim of this project is to develop an algorithm that automatically shifts autonomy between the operator and the robot according to a given metric related to the information coming from the sensors (e.g., the covariance matrix of the employed observer). In this way, the robot will follow the commands of the active perception framework when the localization is poor, avoiding potential problems (e.g., the operator guiding the robot to the wrong position); on the other hand, when the localization is good enough, the robot will move according to the operator’s will. Between these two extremes, the robot will move according to a smooth weighting between the active perception and operator commands; the second sketch after this list illustrates one such weighting law. How this weighting mechanism can be realized in a task-priority approach remains to be analyzed, along with validation on real robots.
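To make the task-level idea of direction 3 concrete, here is a minimal Python sketch (the planar 2-link arm and all numbers are assumptions made here for illustration, not the project’s setup): state-space uncertainty is projected to the task space through the task Jacobian, Sigma_task = J Sigma_state J^T, so that a planner can minimize uncertainty where it matters for the task.

```python
# Minimal sketch: project joint-space uncertainty to the task space.
import numpy as np

def task_jacobian(q, l1=0.5, l2=0.4):
    # Jacobian of the end-effector position of a planar 2-link arm
    # (an illustrative stand-in for the project's aerial manipulator).
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def task_space_covariance(q, Sigma_state):
    # Sigma_task = J Sigma_state J^T: uncertainty as seen by the task.
    J = task_jacobian(q)
    return J @ Sigma_state @ J.T

# The same joint-space uncertainty maps to very different task-space
# uncertainty depending on the arm configuration:
Sigma_q = np.diag([0.02, 0.05])
for q in (np.array([0.3, 1.2]), np.array([1.0, 0.1])):
    print(np.linalg.eigvalsh(task_space_covariance(q, Sigma_q)))
```

Similarly, for direction 4, here is a minimal sketch of a smooth authority-weighting law driven by the observer’s covariance (the smoothstep shape and the thresholds are assumptions, not the project’s actual controller):

```python
# Minimal sketch: blend operator and active-perception commands.
import numpy as np

def authority_weight(cov_trace, low=0.05, high=0.5):
    # Smoothstep in [0, 1]: 1 = full operator authority (good localization),
    # 0 = full active-perception authority (poor localization).
    s = np.clip((high - cov_trace) / (high - low), 0.0, 1.0)
    return s * s * (3.0 - 2.0 * s)  # C^1-smooth transition

def blended_command(u_operator, u_active, cov):
    alpha = authority_weight(np.trace(cov))
    return alpha * u_operator + (1.0 - alpha) * u_active

u_op = np.array([1.0, 0.0])   # operator wants to go straight
u_ap = np.array([0.3, 0.8])   # active perception suggests turning
P_bad = 0.4 * np.eye(2)       # poor localization (trace = 0.8)
P_good = 0.01 * np.eye(2)     # good localization (trace = 0.02)
print(blended_command(u_op, u_ap, P_bad))   # active perception dominates
print(blended_command(u_op, u_ap, P_good))  # operator dominates
```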

 

These projects will capitalize on previous results in [6], where an online optimal active perception strategy was proposed to maximize the information collected by a mobile robot through the available sensor readings along the planned trajectory. In other words, the goal was to generate, online, a trajectory that minimizes the maximum state estimation uncertainty of the employed observer (an EKF in [6]).
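In the same min-max spirit, here is a simplified sketch (under assumptions made for this announcement, not the actual method of [6]): propagate an EKF covariance along a candidate trajectory for a 2D position state with range-only measurements to a known landmark, and score the trajectory by the worst-case uncertainty reached along it; the planner would then choose the candidate with the smallest score.

```python
# Minimal sketch: peak EKF uncertainty along a candidate trajectory.
import numpy as np

def ekf_uncertainty_score(path, landmark, Q=1e-3 * np.eye(2), r=0.01):
    P = 0.1 * np.eye(2)  # initial estimation uncertainty (assumed)
    worst = 0.0
    for p in path:
        P = P + Q                                  # prediction (identity motion model)
        d = p - landmark
        H = (d / np.linalg.norm(d)).reshape(1, 2)  # range-measurement Jacobian
        S = H @ P @ H.T + r                        # innovation covariance (1x1)
        K = P @ H.T / S                            # Kalman gain (2x1)
        P = (np.eye(2) - K @ H) @ P                # update
        worst = max(worst, np.linalg.eigvalsh(P)[-1])
    return worst  # smaller = better: lower peak uncertainty along the plan

# Example: score a straight path from (0, 0) to (4, 0), landmark at (2, 2).
t = np.linspace(0.0, 1.0, 50)
path = np.stack([4.0 * t, np.zeros_like(t)], axis=1)
print(ekf_uncertainty_score(path, np.array([2.0, 2.0])))
```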

 

The framework in [6] was then extended in [7], where the active perception capabilities are combined with the high-level skills of a human operator to accomplish complex tasks in a shared-control framework. Interested students can have a look at Fig. 1 for an illustrative example and at video1, video2, and video3 for some results.

 

References

[1] R. Bajcsy, Y. Aloimonos, and J. K. Tsotsos, “Revisiting active perception,” Autonomous Robots, vol. 42, no. 2, pp. 177–196, Feb. 2018.

[2] K. P. Kording and D. M. Wolpert, “Bayesian decision theory in sensorimotor control,” Trends in Cognitive Sciences, vol. 10, no. 7, pp. 319–326, 2006 (special issue: Probabilistic models of cognition).

[3] S.-H. Yeo, D. W. Franklin, and D. M. Wolpert, “When Optimal Feedback Control Is Not Enough: Feedforward Strategies Are Required for Optimal Control with Active Sensing,” PLoS Computational Biology, vol. 12, no. 12, 2016.

[4] S. Chen, Y. Li, and N. M. Kwok, “Active vision in robotic systems: A survey of recent developments,” The International Journal of Robotics Research, vol. 30, no. 11, pp. 1343–1377, 2011.

[5] L. Seminara, P. Gastaldo, S. J. Watt, K. F. Valyear, F. Zuher, and F. Mastrogiovanni, “Active haptic perception in robots: A review,” Frontiers in Neurorobotics, vol. 13, art. no. 53, Jul. 2019.

[6] P. Salaris, M. Cognetti, R. Spica, and P. Robuffo Giordano, “Online Optimal Perception-Aware Trajectory Generation,” IEEE Transactions on Robotics, vol. 35, no. 6, pp. 1307–1322, 2019.

[7] M. Cognetti, M. Aggravi, C. Pacchierotti, P. Salaris, and P. Robuffo Giordano, “Perception-Aware Human-Assisted Navigation of Mobile Robots on Persistent Trajectories,” IEEE Robotics and Automation Letters, 2020.
