One of our objectives is to develop machine learning algorithms that predict future human movement or intention, so that the robot can anticipate the human's motion in time and better assist them. To do so, we focus on short-horizon predictions and work on models that quantify their uncertainty and express different possible futures: models that propose precise predictions while remaining aware of their own limitations.
When using robots to assist operators, two main challenges arise: anticipating human behavior to compensate for delays in both actuation and communication, and improving predictions to account for unknowns and for contextual information such as payloads, the environment, and task constraints. Anticipating human behavior requires the ability to “understand the human’s intention” and to predict future human behaviors in order to plan suitable actions. This requirement remains difficult to meet, as physical interactions between humans and robots differ in timing and dynamics. In line with our past work, we are interested in algorithms that accurately predict the future whole-body motion of a human, using a data-driven approach: during training, a machine learning model learns to predict the next few seconds of motion given the past motion.
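The idea of predicting the next few seconds from past motion, while expressing several possible futures and their spread, can be illustrated with a toy sketch. The sinusoidal signal, window sizes, sampling rate, and linear least-squares model below are purely illustrative assumptions, not the team's actual architecture; an ensemble over bootstrap resamples stands in for a generic uncertainty-aware predictor.

```python
import numpy as np

# Toy sketch: predict the next 0.5 s of a "motion" signal from the past 1 s.
# Signal, window sizes, and linear model are illustrative, not the real setup.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / 30)                      # 10 s sampled at 30 Hz
motion = np.sin(2 * np.pi * 0.5 * t) + 0.01 * rng.standard_normal(t.size)

PAST, HORIZON = 30, 15                            # 1 s of context -> 0.5 s ahead
X = np.stack([motion[i:i + PAST]
              for i in range(t.size - PAST - HORIZON)])
Y = np.stack([motion[i + PAST:i + PAST + HORIZON]
              for i in range(t.size - PAST - HORIZON)])

# Ensemble of linear least-squares predictors, each fit on a bootstrap
# resample of the training pairs; their disagreement expresses several
# possible futures and a simple (epistemic) uncertainty estimate.
futures = []
for _ in range(10):
    idx = rng.integers(0, X.shape[0], X.shape[0])
    W, *_ = np.linalg.lstsq(X[idx], Y[idx], rcond=None)
    futures.append(motion[-PAST:] @ W)            # one possible future
futures = np.stack(futures)                       # shape (10, HORIZON)

mean_future = futures.mean(axis=0)                # point prediction
uncertainty = futures.std(axis=0)                 # spread across futures
```

A real system would replace the linear model with a learned sequence model and the scalar signal with whole-body joint trajectories, but the train/predict structure is the same.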

Prediction of whole-body trajectories
The HUCEBOT team strives to find the best family of models for whole-body trajectory learning and prediction by investigating various models and building software. Developing a library of learned primitives of whole-body motions, for both humans and robots, is crucial to generate adequate robot movements and to inform robot policies and controllers about how a human would behave in a given situation. In line with the European project euROBIN, which develops algorithms for learning complex whole-body skills by combining human demonstrations and natural language instructions, we aim to explore natural language instructions to revolutionize teleoperation and human-robot interaction.

Multimodality, uncertainty and diversity
In addition to motion data, our work incorporates contextual and multimodal information collected from various sensors and sources. Given the potential of combining language instructions with multimodal data that represent the context or environment, we take on the challenge of making predictions that are precise enough for real-time control, that scale to whole-body movements, and that rely on computationally “light” models.
Interacting with humans in unstructured environments demands that our robots adapt creatively to unforeseen situations and new users, which often fall outside the typical data distribution. While human operators play a pivotal role in adapting to these new situations, models and policies may over-specialize to specific scenarios or motions, or make errors. To address this challenge, we propose two key strategies: integrating an epistemic uncertainty quantification approach into all our methods, and designing algorithms to automatically generate or curate diverse datasets. Having contributed substantially to the creation of Quality Diversity algorithms, we now aim at refining them.
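The diversity strategy can be illustrated with a minimal sketch of MAP-Elites, the canonical Quality Diversity algorithm: rather than seeking a single optimum, it fills an archive with the best solution found for each behavior niche, producing a diverse collection. The toy fitness function, behavior descriptor, and grid size below are illustrative assumptions, not the team's actual problem.

```python
import numpy as np

# Minimal MAP-Elites sketch: keep the best ("elite") solution per behavior
# cell, so the archive covers many qualitatively different solutions.
# Fitness, descriptor, and grid are toy choices for illustration.
rng = np.random.default_rng(2)

def fitness(x):
    return -np.sum(x ** 2)          # quality: negated sphere function

def descriptor(x):
    # Behavior: first two coordinates, binned into a 10 x 10 grid.
    cell = np.clip(((x[:2] + 1) * 5).astype(int), 0, 9)
    return tuple(cell)

archive = {}                        # cell -> (fitness, solution)
for _ in range(5000):
    if archive and rng.random() < 0.9:
        # Select a random elite from the archive and mutate it.
        key = list(archive)[rng.integers(len(archive))]
        x = np.clip(archive[key][1] + 0.1 * rng.standard_normal(4), -1, 1)
    else:
        x = rng.uniform(-1, 1, 4)   # occasional random restart
    f, c = fitness(x), descriptor(x)
    if c not in archive or f > archive[c][0]:
        archive[c] = (f, x)         # keep the best solution per niche

# `archive` now holds up to 100 diverse elites, one per behavior cell.
```

The same archive mechanism can serve dataset curation: keeping one representative per niche yields a dataset that spans the behavior space instead of concentrating around a single mode.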