Full-day workshop on “Haptic-enabled shared control of robotic systems: a compromise between teleoperation and autonomy” – 2018 IEEE/RSJ IROS


As teleoperation systems become more sophisticated and flexible, the environments and applications where they can be employed become less structured and predictable. This desirable evolution toward more challenging robotic tasks requires an increasing degree of training, skill, and concentration from the human operator. For this reason, researchers have started to devise innovative approaches to make the control of such systems more effective and intuitive. In this respect, shared control algorithms have been investigated as one of the main tools for designing complex yet intuitive robotic teleoperation systems, helping operators carry out increasingly difficult robotic applications, such as assisted vehicle navigation, surgical robotics, brain-computer interface manipulation, and rehabilitation. This approach makes it possible to share the available degrees of freedom of the robotic system between the operator and an autonomous controller. The human operator is in charge of imparting high-level, intuitive goals to the robotic system, while the autonomous controller translates them into inputs the robotic system can understand. How to implement this division of roles between the human operator and the autonomous controller depends heavily on the task, the robotic system, and the application. Haptic feedback and guidance have been shown to play a significant and promising role in shared control applications. For example, haptic cues can provide the user with information about what the autonomous controller is doing or planning to do, and haptic forces can be used to gradually limit the degrees of freedom available to the human operator, according to the difficulty of the task or the experience of the user. The dynamic nature of haptic guidance enables the design of very flexible robotic systems, which can easily and rapidly change the division of roles between the user and the autonomous controller.
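As a generic illustration of this division of roles (a minimal sketch, not tied to any specific system presented at the workshop), a common way to implement it is a weighted blending of the operator's and the autonomous controller's commands, with haptic guidance rendered as a virtual spring pulling the operator toward the autonomous suggestion. The function names and the simple linear blending law below are illustrative assumptions:

```python
def blend_commands(u_human, u_auto, alpha):
    """Blend the operator's and the autonomous controller's commands.

    alpha in [0, 1]: 1.0 gives full authority to the human operator,
    0.0 gives full authority to the autonomous controller.
    """
    alpha = min(max(alpha, 0.0), 1.0)
    return [alpha * h + (1.0 - alpha) * a for h, a in zip(u_human, u_auto)]


def guidance_force(u_human, u_auto, stiffness=2.0):
    """Haptic guidance rendered as a virtual spring that pulls the
    operator's input toward the autonomous controller's suggestion."""
    return [stiffness * (a - h) for h, a in zip(u_human, u_auto)]
```

Varying `alpha` over time is one simple way to "easily and rapidly change the division of roles" between user and controller, e.g., lowering it as task difficulty increases.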

This workshop aims to provide a scientific space to discuss this promising field and to foster a lively discussion about the theoretical, technological, and translational aspects of this approach. The workshop brings together highly renowned scientists and engineers from the fields of robotics, haptics, and control systems. Indeed, we believe that there is a strong need for cross-fertilization between scientists working on shared control in these fields. This workshop will bring together people working on industrial manipulators, medical robotics, underwater robotics, human-robot interaction, active constraints, multi-robot systems, mobile robotics, and perception — all using very similar tools to control their robotic systems, but with very little chance of meeting each other and fostering collaborations.


Call for papers

We are currently looking for contributions to the workshop.
Interested participants should send an abstract of one or two pages to marco.cognetti@irisa.fr. All accepted papers will have a prominent role in the workshop.
Topics of interest: shared control algorithms, haptic feedback and guidance, active constraints and virtual fixtures, human-robot interaction, applications in robotic teleoperation, mobile robotics, and medical robotics
Submission deadline: September 5th, 2018
Notification of acceptance: September 20th, 2018


Tentative Program

9:00 – 9:15 Welcome by the organizers
9:15 – 9:50
Paolo Robuffo Giordano (CNRS, France), “Blending Human Assistance and Local Autonomy for Advanced Telemanipulation”
Current and future robotics applications are expected to address increasingly complex tasks in increasingly unstructured environments, in co-existence or co-operation with humans. Achieving full autonomy is clearly a “holy grail” for the robotics community; however, one could easily argue that true full autonomy is, in practice, out of reach for many years to come. The gap between the cognitive skills (e.g., perception, decision making, general “scene understanding”) of humans and those of today's most advanced robots is still huge. In most applications involving tasks in unstructured environments, uncertainty, and interaction with the physical world, human assistance is still necessary, and will probably remain so for decades to come.

These considerations motivate research efforts into the (large) topic of shared control for complex robotic systems: on the one hand, empowering robots with a large degree of autonomy to allow them to operate effectively in non-trivial environments; on the other hand, keeping human users in the loop so that they retain (partial) control over some aspects of the overall robot behavior.

In this talk I will review several recent results on novel shared control architectures that blend together diverse aspects of robot autonomy (sensing, planning, control) to provide a human operator with an easy “interface” for commanding the robot at a high level. Applications to the control of a dual-arm manipulator system for remote telemanipulation will be illustrated.

9:50 – 10:25
Sylvain Calinon (Idiap Research Institute, Switzerland), “Learning from demonstration extended to shared control and teleoperation”
Many human-centered robotic applications would benefit from the development of robots that can acquire new movements and skills from human demonstration and that can reproduce these movements in new situations. Such learning and adaptation challenges require the development of intuitive interfaces to acquire meaningful demonstrations, the development of movement primitive representations that can exploit the structure and geometry of the acquired data in an efficient way, and the development of shared control techniques that can exploit the possible variations and coordinations in the movement. Moreover, the developed models need to serve several purposes (recognition, prediction, generation), and be compatible with different learning strategies (imitation, exploration). For the reproduction of skills, these models need to be enriched with force and impedance information to enable human-robot collaboration and to generate safe and natural movements.

I will present an approach combining model predictive control, statistical learning, and differential geometry to pursue this goal. I will illustrate the proposed approach with various shared control applications, including robots that are close to us (human-robot collaboration, a robot for dressing assistance), part of us (prosthetic hand control from EMG and tactile sensing), or far from us (teleoperation with haptic feedback of a bimanual robot in deep water).

10:25 – 11:00
David Abbink (Delft University of Technology, The Netherlands), “A sensori-motor perspective of haptic shared control: towards individualised and mutually adaptive human-robot interaction”
This talk will give an overview of haptic shared control approaches developed at the Delft Haptics Lab, ranging from solutions that assist a human operator in controlling tele-operated robotic arms to solutions that support a driver interacting with a highly automated vehicle. The basic premise of our work is that haptic communication allows the human and robot to shape each other's control actions and to foster mutual awareness of each other's capabilities, limitations, and intent.
First, the theoretical underpinnings of our design and evaluation philosophy will be explained using a hierarchical framework of shared control, as well as a basic understanding of human sensorimotor control. Second, with the goal of facilitating translational research, several design options for haptic shared control will be discussed, illustrated with case studies evaluating their efficacy. Third, we have taken the perspective of designing the underlying controllers based on cybernetic models of the operator, allowing individualisation of haptic assistance, offline and even in real time. This opens up possibilities for mutually adaptive interaction between human and robot, illustrated by recent studies in the domains of driving and tele-operation.
11:00 – 11:30 Break & Coffee
11:30 – 12:05
Allison M. Okamura (Stanford University, USA), “Haptic Communication for Human-Mobile Robot Interaction”
Besides accomplishing useful tasks autonomously, mobile robots in human environments need to interact and communicate with people. Our goal is to use haptic feedback to facilitate communication between human users and mobile robots. We first present our design and evaluation of a “virtual tether” system for a mobile robot assistant that follows a person. The virtual tether establishes two-way communication that provides the robot's status to the user and enables the user to direct the robot when necessary (e.g., when the robot fails to follow the user due to unexpected obstacles). The tether system consists of a haptic interface that displays touch cues to convey the robot's status via asymmetric vibration and a command interface for teleoperating the robot follower. We tested the virtual tether system in user studies with a physical robot follower and showed that, with the tether system, users can focus on their task better, respond to robot failure events faster, and adapt to the robot's limitations. We then explore the application of haptic communication to indoor robot navigation. We showed that using haptic feedback to convey the high-level intent of the robot can alter the human's response to the robot's movement. A predictive model was derived from experimental data, and we use this model in robot motion and communication planning.
12:05 – 12:40
André Schiele (TU Delft, The Netherlands), “From teleoperation between Space and Earth to advanced haptic shared control”
In past years, the research group led by Dr. Schiele has performed a number of technology demonstration experiments on board the International Space Station, related to the teleoperation of robotic assets on the surface of Earth from space. The stringent requirements related to time delay and the strict limitation of data bandwidth have led to the development of teleoperation control methods robust to time delay, and furthermore to a unification of teleoperation mixed with autonomous control, termed haptic shared control. Methods and modelling frameworks have been conceived that allow both types of systems to be modelled in a more unified manner. The 4-channel architecture notation has been augmented to also qualify as a tool for analyzing control behaviour in mixed autonomous and teleoperated systems. This presentation will show the various control methods used to cope with increasingly large time delays in real experiments and will outline first work on a novel modelling framework that enables the theoretical prediction of shared control system behaviour.
12:40 – 13:15
Ferdinando Rodriguez y Baena (Imperial College, United Kingdom), “Haptics in Surgery: the MIM Lab’s Experience”
Reproduction of the sense of touch is an important attribute for today’s robotic surgical systems, as it is the key to improved realism, immersion, and an intuitive user experience. It is not always clear, however, how best to incorporate haptics within complex scenarios involving multiple degrees of redundancy, soft and deformable tissues, and fast dynamics, since the addition of actuators will pose important safety considerations. This presentation will explore the Mechatronics in Medicine Laboratory’s collective experience of kinaesthetic and proprioceptive feedback in surgery, over almost two decades of research and development. It will highlight examples of translational successes, new methods, and lessons learnt, with the aim to aid future explorations in this space.
13:15 – 13:30 Poster teasers
13:30 – 14:45 Lunch break
14:45 – 15:20
Antonio Franchi and Davide Bicego (CNRS, France), “Tele-MAGMaS: a Multiple Aerial Ground co-MAnipulator System”
The manipulation of large objects by robotic systems is a challenge for applications in the construction industry, industrial decommissioning (e.g., nuclear waste disposal), and Urban Search and Rescue (USAR). These applications share the feature of being dangerous workplace environments, and thus motivate the need for robotic solutions that replace human presence. In the literature, manipulation of objects such as pipes, bars, beams, and metal frameworks is mostly addressed using either only aerial manipulators (AMs) or only ground manipulators (GMs), each coming with its own set of advantages and drawbacks. We present a novel class of systems that tackles the problem of manipulating long objects that cannot be grasped close to their center of mass. We propose a novel design and a modular control structure able to leverage the advantages of both AMs and GMs, while also allowing for operation at different autonomy levels. The proposed system is called Tele-MAGMaS, where MAGMaS stands for “Multiple Aerial Ground Manipulator System” and “Tele” reflects the tele-operation capabilities. In addition, we devised a versatile control framework implementing the different modes of i) full (supervised) autonomy, ii) tele-operation, and iii) shared control, thus allowing the system to cope with environments of different complexity by also leveraging (when needed) the cognitive abilities of a human operator.
15:20 – 15:55
Selma Music and Sandra Hirche (TUM, Germany), “Shared control for semi-autonomous robot team teleoperation through wearable haptics”
Teleoperation of robot teams is beneficial in cooperative tasks, e.g., cooperative manipulation of heavy and large objects in remote or dangerous environments. However, since robot teams can have a relatively high number of degrees of freedom compared to the human operator, teleoperation needs to be established with lower-dimensional human commands and feedback signals. This challenge can be resolved by effectively combining the cognitive capabilities of the human and the autonomous capabilities of the robot team through shared control strategies. We developed a shared control approach that allows simultaneous execution of the subtasks necessary to accomplish the teleoperation goal. Control over each subtask is allocated either to the human or to the robots' autonomy. Furthermore, in order to provide the human operator with feedback on multi-contact interaction in cooperative manipulation and with an extended workspace, we explore the suitability of wearable haptic devices in teleoperation.
15:55 – 16:30
Cristian Secchi (University of Modena and Reggio Emilia, Italy), “Energy-based shared control for multi-robot systems”
Passivity-based control has been successfully exploited for the implementation of single-master/single-slave bilateral teleoperation systems over the last decades. Nevertheless, when it comes to the teleoperation of multiple slaves, it is necessary to introduce some autonomy at the slave side in order to reduce the number of DOFs that need to be mastered by the human. Such autonomy often conflicts with the constraints imposed by standard passivity-based control. In this talk, I will introduce the concept of the energy tank and show how to use it for teleoperating a multi-robot team. The resulting shared control architecture allows the flexibility required by the autonomy of the slave side while preserving a passive behavior of the overall teleoperation system.
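To give a flavor of the energy-tank idea, here is a generic discrete-time sketch (an illustrative assumption, not Prof. Secchi's actual implementation): energy dissipated by the controlled system refills a virtual tank, and energy-injecting (potentially non-passive) autonomous actions are granted only while the tank stays above a safety threshold, so the overall system remains passive. The class and method names are invented for illustration:

```python
class EnergyTank:
    """Virtual energy tank for passivity-preserving shared control.

    The tank is filled by the energy the system dissipates and is
    drained by energy-injecting control actions; an action is allowed
    only if the tank does not fall below a minimum energy level.
    """

    def __init__(self, initial_energy=1.0, min_energy=0.1):
        self.energy = initial_energy
        self.min_energy = min_energy

    def refill(self, dissipated_power, dt):
        # Dissipated energy over one time step refills the tank.
        self.energy += dissipated_power * dt

    def try_draw(self, requested_power, dt):
        """Grant a potentially non-passive action only if the tank
        stays at or above its minimum level after the draw."""
        cost = requested_power * dt
        if self.energy - cost >= self.min_energy:
            self.energy -= cost
            return True   # action allowed, passivity preserved
        return False      # action denied (or must be scaled down)
```

In this sketch, the slave-side autonomy calls `try_draw` before executing an action that injects energy; when the tank is nearly empty the action is refused, which is what bounds the energy the autonomous layer can introduce into the teleoperation loop.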
16:30 – 17:00 Break & Coffee
17:00 – 19:00 Conclusions & Interactive sessions


The workshop will be held on October 1, 2018, as part of the IEEE/RSJ IROS conference (https://www.iros2018.org/).


Dr. Marco Cognetti, Centre National de la Recherche Scientifique (CNRS)

Marco is a postdoc at the Centre National de la Recherche Scientifique (CNRS) at Irisa and Inria Rennes. He received the Ph.D. degree in Control Engineering from Sapienza University of Rome, Italy, in 2016. In 2011, he was a Visiting Scholar in the Human-Robot Interaction group at the Max Planck Institute for Biological Cybernetics (MPI-KYB), Tübingen, Germany. In 2015, he was a Visiting Scholar at the Personal Robotics Lab of The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA. His research interests include state estimation, shared control, multi-robot control and localization, and whole-body motion planning and control for mobile robots.

Email: marco.cognetti@irisa.fr

Prof. Jee-Hwan Ryu, Korea University of Technology and Education (KOREATECH)

Jee-Hwan is a Professor in the Department of Mechanical Engineering, KOREATECH, Cheonan, South Korea. He received the B.S. degree in mechanical engineering from Inha University, Incheon, South Korea, in 1995, and the M.S. and Ph.D. degrees in mechanical engineering from Korea Advanced Institute of Science and Technology, Taejon, South Korea, in 1995 and 2002, respectively. His research interests include haptics, telerobotics, exoskeletons, and autonomous vehicles.

Email: jhryu@kut.ac.kr

Prof. Domenico Prattichizzo, University of Siena (UNISI) and Italian Institute of Technology (IIT)

Domenico is Professor of Robotics at the University of Siena, a Senior Scientist at the Istituto Italiano di Tecnologia in Genova, an IEEE Fellow, and a co-founder of WEART, a startup for VR and AR applications. Human and robotic hands, together with the art of manipulating real and virtual objects, have always polarized his research, which has recently focused on wearable haptics, VR/AR, and wearable robotics. He founded the SIRSLab, where he has led an extraordinary and enthusiastic research team for years. He has given many plenary talks on robotics, most recently at the International Conference on Robotics and Automation (2016) and the International Conference on Asia Haptics (2017), where he won an award for his research activities in virtual reality. He was selected among the two best Cross-Cutting Challenges Initiatives at the IEEE Haptics Symposium 2018 in San Francisco (http://2018.hapticssymposium.org/crosscuttingchallenges) with the theme “The path to intelligent clothes and objects able to change the way we communicate with the world”. He is the author of more than 250 scientific articles in the fields of robotics and virtual reality.

Email: prattichizzo@diism.unisi.it

Dr. Claudio Pacchierotti, Centre National de la Recherche Scientifique (CNRS)

Claudio has been a CNRS Chargé de Recherche (CRCN) in Rennes, France, since December 2016. He received the B.S., M.S., and Ph.D. degrees from the University of Siena, Italy, in 2009, 2011, and 2014, respectively. He was a postdoctoral researcher at the Dept. of Advanced Robotics of the Italian Institute of Technology, Genova, Italy, in 2015 and 2016. He visited the University of Padua, the University of Pennsylvania, and the University of Twente in 2013, 2014, and 2015, respectively. He received the 2014 EuroHaptics Best PhD Thesis Award for the best doctoral thesis in the field of haptics, and the 2015 Meritorious Service Award for his work as a Reviewer for the IEEE Transactions on Haptics. He has also been an Associate Editor for the 2017 IEEE World Haptics conference, as well as the Publicity Chair for the 2017 IEEE World Haptics and 2018 Asia Haptics conferences. He is the Chair of the IEEE Technical Committee on Haptics.

Email: claudio.pacchierotti@irisa.fr
