
Probabilistic models for human-robot collaboration

Author: Vincent Thomas

General information

Supervisors: Vincent Thomas, Francis Colas, François Charpillet
Phone numbers: 03 54 95 85 08 / 03 54 95 86 30
E-mail: vincent.thomas@loria.fr, francis.colas@inria.fr, francois.charpillet@inria.fr
Offices: C125, C125, C123

Context

Collaboration between humans and robots is a high-stakes research subject with numerous application areas (smart factories, therapeutic companion robots, etc.). The internship proposed here is linked to the ANR project “Flying co-worker” (accepted in 2018), whose aim is to build a collaborative flying robot to help human workers.

More precisely, the proposed subject focuses on building the behavior (or conditional plan) of an autonomous robot assisting a human worker, so that the worker can fulfill his tasks in the best possible way (i.e., by minimizing a cost to be defined). It is a complex issue since (1) the robot has to estimate the objective of the human worker through partial observations of his activity, (2) the robot has to make decisions based on this partial information, and (3) the human behavior may itself depend on the actions undertaken by the robot.

Objective

This internship aims to address this issue using models from the “decision making under uncertainty” research field (Markov Decision Processes, Partially Observable Markov Decision Processes [1]) and to investigate several questions:
– how to model the human worker’s behavior, his objectives and the task sequence he tries to accomplish (with the help of Markov chains);
– how to use this model to infer distributions on the current objective of the human worker based on his actions (Bayesian inference, HMM [2]);
– how to decide actions to gather more information about the human task (with the help of active sensing models and algorithms [3]);
– how to help the human worker while considering uncertainties about his state and his evolving objectives (by using POMDP models).
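To make the second question concrete, the inference over the worker’s current objective can be sketched as forward filtering in a hidden Markov model. The objectives, actions, transition matrix and observation model below are illustrative assumptions, not part of the project description:

```python
import numpy as np

# Hypothetical hidden objectives of the worker and their dynamics.
objectives = ["drill", "assemble", "inspect"]
T = np.array([[0.80, 0.15, 0.05],   # P(next objective | current objective)
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])
# P(observed action | objective); action indices: 0=pick_tool, 1=fasten, 2=look
O = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.7, 0.1],
              [0.1, 0.1, 0.8]])

def forward_filter(belief, action_idx):
    """One step of Bayesian filtering: predict with T, correct with O."""
    predicted = T.T @ belief                  # prediction (time update)
    updated = predicted * O[:, action_idx]    # correction (likelihood weighting)
    return updated / updated.sum()            # normalize to a distribution

belief = np.full(3, 1 / 3)                    # uniform prior over objectives
for a in [0, 0, 1]:                           # observed worker actions over time
    belief = forward_filter(belief, a)
print(dict(zip(objectives, belief.round(3))))
```

After two “pick_tool” observations followed by one “fasten”, the belief shifts toward the “assemble” objective, showing how partial observations of the worker’s activity gradually reveal his goal.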

Proposed approach

As a first step, the student will study Markov models before proposing a formalization of a situation where an autonomous agent has to help a human attain his current objective, initially unknown to the agent. For instance, the autonomous agent has to decide which tool to bring, among several available, to an isolated human worker, by considering the probabilistic workflow of his activity. In a second step, the internship will address the questions cited previously: first by finding a way to model the human objective [4] [5] and his adaptation to the robot’s actions [6], then by proposing algorithms to build the optimal behavior of the agent based on these models [7] and on reasoning about human-robot joint action [8].
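The tool-delivery example can be sketched as a myopic decision rule: given a belief over the worker’s current objective, the agent picks the tool minimizing the expected cost. The objectives, tools and cost values below are illustrative placeholders:

```python
import numpy as np

# Hypothetical objectives and tools; cost[o, t] is the cost incurred when
# tool t is delivered while the worker's true objective is o.
objectives = ["drill", "assemble", "inspect"]
tools = ["drill_bit", "screwdriver", "flashlight"]
cost = np.array([[0.0, 5.0, 5.0],
                 [5.0, 0.0, 5.0],
                 [5.0, 5.0, 1.0]])

def best_tool(belief):
    """Myopic decision: minimize the expected cost E_b[cost(o, t)] over tools."""
    expected = belief @ cost                  # expected cost of each tool
    return tools[int(np.argmin(expected))]

belief = np.array([0.1, 0.7, 0.2])            # belief inferred from observations
print(best_tool(belief))                      # → screwdriver under this belief
```

A full POMDP policy would go further, also valuing information-gathering actions (e.g., waiting or observing before committing to a tool), which is precisely what the active-sensing and POMDP questions above address.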

During the internship, this work will mostly be done in simulation to investigate algorithms and their efficiency, but, if time allows, a simple scenario (to be built during the internship) could be tested on a real robot.

Keywords: artificial intelligence, decision making under uncertainty, Bayesian inference.

References

[1] Sigaud, O. and Buffet, O. (2008). Markov Decision Processes in Artificial Intelligence. Lavoisier – Hermes Science Publications.

[2] Rabiner L. R. (1989). A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2), 257-286.

[3] Araya-López, M., Buffet, O., Thomas, V. and Charpillet, F. (2010). A POMDP Extension with Belief-dependent Rewards. In Advances in Neural Information Processing Systems 23 (NIPS 2010).

[4] Fansi Tchango, A., Thomas, V., Buffet, O., Flacher, F. and Dutech, A. (2014). Simultaneous Tracking and Activity Recognition (STAR) using Advanced Agent-Based Behavioral Simulations. 21st European Conference on Artificial Intelligence (ECAI 2014).

[5] Fern, A., Natarajan, S., Judah, K. and Tadepalli, P. (2014). A Decision-theoretic Model of Assistance. Journal of Artificial Intelligence Research (JAIR), 50, 71-104.

[6] Nikolaidis, S., Hsu, D. and Srinivasa, S. (2017). Human-Robot Mutual Adaptation in Collaborative Tasks: Models and Experiments. In The International Journal of Robotics Research, 36(5-7), 618-634.

[7] Hadfield-Menell, D., Dragan, A., Abbeel, P. and Russell, S. (2016). Cooperative Inverse Reinforcement Learning. In Advances in Neural Information Processing Systems 29 (NIPS 2016).

[8] Lemaignan, S., Warnier, M., Sisbot, E.A., Clodic, A. and Alami, R. (2017). Artificial cognition for social human-robot interaction: An implementation. In Artificial Intelligence – Special Issue on AI and Robotics, 247, 45-69.
