Gesture-based Interaction in Virtual Reality Using a Data Glove

Location: Inria Rennes / IRISA
Starting Date: From October 2020
Duration: up to six months

Contacts:

  • Ferran Argelaguet, (Hybrid), ferran.argelaguet@inria.fr
  • Maud Marchal, (Rainbow), maud.marchal@irisa.fr
  • Claudio Pacchierotti, (Rainbow), claudio.pacchierotti@irisa.fr

Grasping virtual objects [1] in virtual reality using data gloves can prove extremely difficult due to two major challenges. First, current limitations of hand tracking systems (e.g. data gloves) prevent an accurate representation of the user's hand configuration, mainly due to the limited number of degrees of freedom they can capture and to their accuracy (e.g. noise and tracking errors). Second, and more relevant to this internship, there is still a need for robust and efficient methods that support stable and natural grasps of virtual objects. Two families of methods exist: physically-based methods [3], which try to reproduce the mechanics of real grasping by simulating the forces involved in the grasping process, and gesture-based methods [7], which use predefined grasping postures to grasp objects through discrete pick-and-release mechanisms.
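
To make the distinction concrete, the sketch below illustrates the core idea behind physically-based methods such as the spring model of [3]: the simulated hand is coupled to the tracked glove pose through spring-damper forces rather than snapped to it, so grasping forces emerge from the physics engine. This is only a minimal Unity3D/C# illustration under simplifying assumptions; the class, field names and gains are hypothetical, not the method of [3] itself.

    using UnityEngine;

    // Hypothetical sketch: the virtual hand is a rigid body pulled toward
    // the tracked glove pose by a spring-damper, so contact forces with
    // virtual objects emerge from the physics simulation.
    public class SpringCoupledHand : MonoBehaviour
    {
        public Transform trackedHand;  // pose reported by the data glove
        public Rigidbody virtualHand;  // physically simulated hand proxy
        public float stiffness = 500f; // spring constant (to be tuned)
        public float damping = 25f;    // damper constant (to be tuned)

        void FixedUpdate()
        {
            // Linear spring-damper: F = k * (x_target - x) - c * v
            Vector3 offset = trackedHand.position - virtualHand.position;
            virtualHand.AddForce(stiffness * offset - damping * virtualHand.velocity);
            // A full implementation would also apply rotational
            // spring-damper torques, per finger joint; omitted for brevity.
        }
    }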

Gesture-based methods will be the main focus of this internship, and in particular the study and design of 3D interaction techniques that enable the user to grasp virtual objects using a wide variety of hand gestures. Gesture-based methods have three major components: detecting the user's intent [6] (i.e. which object the user wants to interact with), gesture recognition [4, 5] (i.e. classifying which hand gesture the user performs when grabbing the target object) and the pick/release trigger (i.e. determining when the user starts grabbing the object and when the user releases it). On top of this, visual feedback has to guide the user throughout the entire grasping task [2]. A rough sketch of how these three components fit together is given below.
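
The following Unity3D/C# sketch wires the three components together: a proximity-based intent heuristic, a placeholder gesture classifier, and a flexion-threshold pick/release trigger with hysteresis. It is a starting point under stated assumptions, not a prescribed design: all names, tags and thresholds are illustrative, and a real system would replace the classifier stub with a trained model [4, 5].

    using UnityEngine;

    // Hypothetical sketch of a gesture-based grasping pipeline:
    // (1) intent detection, (2) gesture recognition, (3) pick/release trigger.
    public class GestureGrasp : MonoBehaviour
    {
        public Transform palm;                // tracked palm pose
        public float intentRadius = 0.15f;    // proximity radius for intent (m)
        public float pickThreshold = 0.6f;    // mean flexion that triggers a pick
        public float releaseThreshold = 0.4f; // hysteresis avoids trigger flicker

        GameObject grasped;

        // (1) Intent detection: here, simply the closest graspable object
        // within reach; [6] describes a more principled approach.
        GameObject DetectIntent()
        {
            GameObject best = null;
            float bestDist = float.MaxValue;
            foreach (Collider c in Physics.OverlapSphere(palm.position, intentRadius))
            {
                float d = Vector3.Distance(palm.position, c.transform.position);
                if (c.CompareTag("Graspable") && d < bestDist)
                {
                    best = c.gameObject;
                    bestDist = d;
                }
            }
            return best;
        }

        // (2) Gesture recognition stub: a trained classifier [4, 5] would go
        // here; this placeholder only averages normalized finger flexion.
        float MeanFlexion(float[] jointFlexion)
        {
            float sum = 0f;
            foreach (float f in jointFlexion) sum += f;
            return sum / jointFlexion.Length;
        }

        // (3) Pick/release trigger, called once per frame with the glove's
        // normalized joint readings (0 = fully open, 1 = fully closed).
        public void UpdateGrasp(float[] jointFlexion)
        {
            float flexion = MeanFlexion(jointFlexion);
            if (grasped == null && flexion > pickThreshold)
            {
                grasped = DetectIntent();
                if (grasped != null)
                    grasped.transform.SetParent(palm); // snap object to the hand
            }
            else if (grasped != null && flexion < releaseThreshold)
            {
                grasped.transform.SetParent(null);     // discrete release
                grasped = null;
            }
        }
    }

The two thresholds deliberately differ (hysteresis) so that noisy flexion readings near a single cutoff do not cause rapid pick/release oscillation; a production system would also handle the grasped object's physics state on pick and release.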

The goal of this internship is to explore current machine learning algorithms in order to support gesture-based interfaces that provide intuitive and robust methods for grasping virtual objects. The candidate will (1) survey the state of the art of methods for determining the user's intent and for hand gesture classification, (2) propose a proof of concept of a gesture-based interaction system, and (3) evaluate the proof of concept experimentally to assess the user's experience [8]. The candidate will have access to state-of-the-art virtual reality equipment (head-mounted displays, data gloves) to conduct this research.

The candidate should be comfortable with as many of the following items as possible:

  • Experience in the development of 3D/VR applications using Unity3D and C#.
  • Background in machine learning.
  • Background in computer graphics.
  • Good spoken and written English.
  • Good communication skills.

[1] T. Feix, J. Romero, H.-B. Schmiedmayer, A. M. Dollar and D. Kragic, "The GRASP taxonomy of human grasp types," IEEE Transactions on Human-Machine Systems, vol. 46, no. 1, pp. 66-77, 2016.
[2] M. Prachyabrued and C. W. Borst, "Design and evaluation of visual interpenetration cues in virtual grasping," IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 6, pp. 1718-1731, 2016.
[3] C. W. Borst and A. P. Indugula, "A spring model for whole-hand virtual grasping," Presence: Teleoperators and Virtual Environments, vol. 15, no. 1, pp. 47-61, 2006.
[4] G. Heumer, H. B. Amor, M. Weber and B. Jung, "Grasp recognition with uncalibrated data gloves: A comparison of classification methods," in Proc. IEEE Virtual Reality Conference, 2007, pp. 19-26.
[5] Z. Ju, H. Liu, C. Zhu and Y. Xiong, "Dynamic grasp recognition using time clustering, Gaussian mixture models and hidden Markov models," Advanced Robotics, vol. 23, no. 10, pp. 1359-1371, 2009, doi: 10.1163/156855309X462628.
[6] F. Periverzov and H. Ilieş, "IDS: The intent driven selection method for natural user interfaces," in Proc. IEEE Symposium on 3D User Interfaces (3DUI), 2015.
[7] H. Tian et al., "Realtime hand-object interaction using learned grasp space for virtual environments," IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 8, pp. 2623-2635, 2019.
[8] F. Argelaguet, L. Hoyet, M. Trico and A. Lécuyer, "The role of interaction in virtual embodiment: Effects of the virtual hand representation," in Proc. IEEE Virtual Reality (VR), 2016, pp. 3-10, doi: 10.1109/VR.2016.7504682.
[9] S. Jörg, Y. Ye, M. Neff, F. Mueller and V. Zordan, "Virtual hands in VR: Motion capture, synthesis, and perception," in ACM SIGGRAPH 2020 Courses, 2020, pp. 1-145.