PhD Position in Visual- and Haptic-based Control of Aerial Robots for the Manipulation of Articulated Objects

Image credits to [3] and [4].


The successful candidate will be hired by the Rainbow team at IRISA/Inria Rennes, France.

Advised by: Paolo Robuffo Giordano and Marco Tognon (Rainbow team)

How to apply:  Interested candidates are requested to apply via this form. The position will remain open until a satisfactory candidate is found.


Context

Aerial robots (commonly called “drones”) are nowadays extensively used to observe the environment in applications like agriculture, mapping, etc. However, if aerial robots were also able to effectively manipulate the environment, their application domains could be further extended toward new areas such as contact-based inspection, assembly, and construction. The research community has therefore focused on the design and control of aerial manipulators [1], which opened the door to new applications, e.g., contact-based inspection [2]. However, current methodologies are still limited to very simple interaction tasks, involving limited contact behaviors with static and rigid surfaces (e.g., touching a flat wall with a stick attached to the robot), performed in very controlled environments.

To address this gap, the goal of this project is to enhance the physical interaction capabilities of aerial manipulators by considering:

  • manipulation tasks involving articulated and dynamic objects (e.g., opening doors and valves, assembling structures, etc.), possibly extended to physical human-aerial robot collaboration;
  • real application conditions characterized by uncertainties due to system modeling errors, noisy and imprecise measurements, imprecise actuation models, and partially unknown environments, either static or highly dynamic, as in the case of human-robot interaction.

Envisaged Activities

M. Tognon was among the first to explore the challenging problem of interacting with articulated and movable objects, e.g., pushing carts [3] and opening doors and valves [4]. Although these preliminary results are very encouraging, they were obtained in extremely controlled conditions, and robustness against modeling errors remains a major open problem.

This project will build upon these previous works, extending them to improve reliability and to rely on onboard sensors only. We will investigate how visual sensors and visual-servoing concepts can improve robustness. However, aware of their limitations in terms of accuracy, we will also investigate a complementary sensing modality: the sense of touch. We will study how skin-like sensors can be used to enhance the robot's understanding of the environment and of the physical interaction. Machine learning and reinforcement learning methods will be used to design a policy that computes, directly from the sensor readings, the best robot action to perform the task.

The newly designed methods will be validated in real experiments. We will first address the problem of opening a door, which will then be extended to the manipulation of other articulated objects, eventually including deformable ones, and even to physical interaction with humans.

Skills/Requirements

  • M.Sc. or Ph.D. degree in computer science, robotics, engineering, applied mathematics, or a related field
  • Good experience in C/C++, ROS, and MATLAB/Simulink
  • Scientific curiosity, a high degree of autonomy, and the ability to work independently

The following experience is considered a plus:

  • Experience with cameras and visual perception methods
  • Experience with optimization methods and corresponding libraries
  • Experience with machine learning for vision and reinforcement learning for control

Conditions

The Ph.D. position is full-time for 3 years (the standard duration in France). The position will be paid according to the French salary regulations for Ph.D. students.

How to apply

Interested candidates are requested to apply via this form.
The position will remain open until a satisfactory candidate is found.

References

    1. A. Ollero, M. Tognon, A. Suarez, D. J. Lee, and A. Franchi. Past, present, and future of aerial robotic manipulators. IEEE Transactions on Robotics, 2021.
    2. M. Tognon, H. A. Tello-Chavez, E. Gasparin, Q. Sablé, D. Bicego, A. Mallet, M. Lany, G. Santi, B. Revaz, J. Cortés, and A. Franchi. A truly-redundant aerial manipulator system with application to push-and-slide inspection in industrial plants. IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 1846–1851, 2019.
    3. F. Benzi, M. Brunner, M. Tognon, C. Secchi, and R. Siegwart. Adaptive tank-based control for aerial physical interaction with uncertain dynamic environments using energy-task estimation. IEEE Robotics and Automation Letters, vol. 7, no. 4, pp. 9129–9136, 2022.
    4. M. Brunner, G. Rizzi, M. Studiger, R. Siegwart, and M. Tognon. A planning-and-control framework for aerial manipulation of articulated objects. IEEE Robotics and Automation Letters, vol. 7, no. 4, pp. 10689–10696, 2022.
