Francis Colas and Vincent Thomas
Autonomy in robots often requires an internal representation of the current state of both the
robot and its environment. For instance, a mobile robot aiming to reach a specific location will
often estimate its current location and a map of the place; a robotic arm trying to pick up an
object will need the pose of the object (position and orientation), a description of potential
obstacles, and its own current configuration; a humanoid robot interacting with a human
will need to know what the human is currently doing, her pose, her intention, and her emotional state.
Standard state estimation techniques often rely on a probabilistic representation wherein a
probability distribution over the state space is recursively computed based on some
observations and a model of the evolution of the system. This is generically known as a
Bayesian filter, which has several classical instantiations according to the specificities of the
system. Typical examples are Hidden Markov Models for discrete state representations
and full transition matrices [Rabiner, 1989], Kalman filters for continuous states with Gaussian
distributions and linear models [Kalman, 1960], or even particle filters for a sampled
representation of the distributions [Doucet et al., 2000]. These techniques have been applied
to distinct parts of the estimation problem: the state of the robot with embedded
sensors [Kubelka et al., 2015, Hitz et al., 2016] or with distributed
sensors [Rio et al., 2016], human activity [Dubois and
Charpillet, 2013], sound sources [Nguyen et al., 2016], or even a map of the environment
[Durrant-Whyte and Bailey, 2006]. As expected, different representations are suited to different estimation problems.
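As an illustration of the generic predict/update recursion of a Bayesian filter, the following sketch implements its simplest instantiation, a one-dimensional Kalman filter (all models, noise values, and observations below are illustrative, not taken from the project):

```python
def kalman_step(mean, var, u, z, a=1.0, b=1.0, q=0.01, r=0.1):
    """One predict/update cycle of a 1-D Kalman filter.

    mean, var : current Gaussian belief over the state
    u, z      : control input and observation
    a, b, q   : linear motion model x' = a*x + b*u with process noise q
    r         : observation noise variance for z = x + noise
    """
    # Prediction: propagate the belief through the motion model.
    mean_p = a * mean + b * u
    var_p = a * a * var + q
    # Update: fuse the predicted belief with the observation.
    k = var_p / (var_p + r)          # Kalman gain
    mean = mean_p + k * (z - mean_p)
    var = (1.0 - k) * var_p
    return mean, var

# Track a state drifting under a unit control with noisy observations.
mean, var = 0.0, 1.0
for z in [1.1, 1.9, 3.2]:
    mean, var = kalman_step(mean, var, u=1.0, z=z)
```

The same two-step recursion underlies the other instantiations: an HMM replaces the Gaussian belief with a discrete distribution and the linear model with a transition matrix, while a particle filter replaces it with a weighted sample set.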
The key challenge comes when fusing information from multiple sources with different
characteristics. Classically, one either builds a single monolithic filter over the full state and all
sensors, or chooses a specific representation shared across all sub-filters
so that they can be treated together in a weak-fusion scheme. This works well until one
needs to build an integrated representation of both the robot and the environment
based on various modalities and processes. For instance, it is important to jointly
work with discrete probabilities, Gaussian or mixture-of-Gaussian distributions,
and particles in order to build a representation of the environment that includes an
occupancy map of the obstacles, the locations of sound sources and of several persons with
different activities, and the current state of the robot based on all these estimates.
A second challenge is to integrate machine-learning predictions into
model-based filters. Indeed, as the world representation becomes more complete, it becomes
more difficult to specify relevant models. One solution is to resort entirely to machine
learning, without attempting to specify any model, but it would be better to reuse the
models we already have, despite their shortcomings, and to use machine learning as a complement.
The aim of this project is therefore to advance the state of the art in filtering techniques for
robotics along one of two principal directions.
The first objective could be to find a way to propagate information between the
different kinds of filters: how can we perform efficient Bayesian inference with distinct distribution
representations? Approaches to this question could typically come from approximation
techniques such as sampling or moment matching.
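As a toy illustration of these two approximations (illustrative code, not part of the project), a weighted particle set can be summarized as a Gaussian by moment matching, and a Gaussian belief can conversely be handed to a particle filter by sampling:

```python
import random

def particles_to_gaussian(particles, weights):
    """Moment matching: collapse a weighted particle set to (mean, var)."""
    total = sum(weights)
    mean = sum(w * x for x, w in zip(particles, weights)) / total
    var = sum(w * (x - mean) ** 2 for x, w in zip(particles, weights)) / total
    return mean, var

def gaussian_to_particles(mean, var, n, rng=random.Random(0)):
    """Sampling: represent a Gaussian belief with n equally weighted particles."""
    return [rng.gauss(mean, var ** 0.5) for _ in range(n)]

# A particle cloud around 2.0 becomes a Gaussian, and back again.
xs = [1.8, 2.0, 2.2, 2.1]
ws = [0.2, 0.4, 0.3, 0.1]
m, v = particles_to_gaussian(xs, ws)
resampled = gaussian_to_particles(m, v, n=1000)
```

Each conversion loses information (moment matching discards multimodality, sampling introduces noise), which is precisely why such transfers must be designed with care.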
The second objective could be to combine these model-based filtering techniques
with machine learning. Indeed, the models are never complete or correct, and several
techniques require approximations to become tractable. There are therefore systematic errors
that could potentially be corrected by model-free learning techniques such as deep neural
networks. This second objective again requires the ability to transfer information across
different kinds of representations.
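A minimal sketch of this idea (all models and data are illustrative; in practice a deep network would replace the least-squares fit): an imperfect analytical motion model is kept, and its systematic error is learned from logged data as a residual correction applied after each prediction.

```python
import numpy as np

# Imperfect analytical model: ignores a constant drift of the true system.
def predict(x):
    return x + 1.0                      # assumed motion model

def true_step(x):
    return x + 1.0 + 0.3                # real system has an unmodeled drift

# Collect (state, residual) pairs from logged data.
states = np.arange(0.0, 10.0, 1.0)
residuals = np.array([true_step(x) - predict(x) for x in states])

# Fit a residual corrector (linear least squares as a stand-in for a
# neural network): residual ~= w * x + b.
A = np.stack([states, np.ones_like(states)], axis=1)
w, b = np.linalg.lstsq(A, residuals, rcond=None)[0]

def corrected_predict(x):
    """Model-based prediction plus the learned systematic correction."""
    return predict(x) + w * x + b
```

Here the corrector recovers the unmodeled drift exactly because the error is constant; the interest of learned correctors is that they can also capture state-dependent errors the analytical model misses.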
A. Doucet, N. De Freitas, K. Murphy, and S. Russell. Rao-Blackwellised particle filtering for
dynamic Bayesian networks. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial
Intelligence, pages 176–183. Morgan Kaufmann Publishers Inc., June 2000.
A. Dubois and F. Charpillet. Human activities recognition with RGB-Depth camera using HMM.
In 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society
(EMBC), pages 4666–4669. IEEE, July 2013.
H. Durrant-Whyte and T. Bailey. Simultaneous localization and mapping: part I. IEEE Robotics
& Automation Magazine, 13(2):99–110, 2006.
G. Hitz, F. Pomerleau, F. Colas, and R. Siegwart. State estimation for shore monitoring using
an autonomous surface vessel. In Experimental Robotics, pages 745–760. Springer International
Publishing, 2016.
R. E. Kalman. A new approach to linear filtering and prediction problems. Journal of Basic
Engineering, 82(1):35–45, 1960.
V. Kubelka, L. Oswald, F. Pomerleau, F. Colas, T. Svoboda, and M. Reinstein. Robust data
fusion of multimodal sensory information for mobile robots. Journal of Field Robotics, 32(4), 2015.
Q. V. Nguyen, F. Colas, E. Vincent, and F. Charpillet. Localizing an intermittent and moving
sound source using a mobile robot. In 2016 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS), pages 1986–1991. IEEE, October 2016.
L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech
recognition. Proceedings of the IEEE, 77(2):257–286, February 1989.
M. Rio, F. Colas, M. Andries, and F. Charpillet. Probabilistic sensor data processing for robot
localization on load-sensing floors. In 2016 IEEE International Conference on Robotics and
Automation (ICRA), pages 4544–4550. IEEE, May 2016.