
Grasping 3D Objects With Virtual Hand in VR Environment

Published: 13 January 2023

Abstract

In existing virtual hand grasping work, the virtual hand is typically hidden after it grasps an object in Virtual Reality (VR) environments, which significantly impacts the user experience. In this paper, we build a real-time, flexible, robust, and natural virtual hand grasping system in which the virtual hand is an avatar of the user's real hand: by squeezing the controller handle, a user can control the virtual hand to grasp different rigid objects naturally and realistically in the VR environment. Our method comprises three modules: a Grasping Detection module that detects whether a graspable object is present, a Hand-Object Connection module that attaches the graspable object to the virtual hand, and a Finger Bending module that bends the fingers to grasp the object. We conduct experiments on grasping rigid objects and manipulating physical tools with the virtual hand, and show that the virtual hand fits the objects very well. We also compare our method against the hand-hiding technique in two purpose-built VR scenarios, demonstrating the superiority of our method.
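The three-module pipeline described in the abstract can be sketched as a per-frame update loop. The sketch below is an illustrative assumption, not the paper's implementation: all names, thresholds, and the size-based bending heuristic are hypothetical stand-ins for the method's actual detection, attachment, and flexion logic.

```python
# Hedged sketch of the abstract's three-module grasping pipeline.
# Names, thresholds, and the bending heuristic are illustrative
# assumptions, not the authors' implementation.
from dataclasses import dataclass, field
from typing import Optional

GRASP_RADIUS = 0.12   # assumed detection radius around the palm, in metres
MAX_BEND = 90.0       # assumed maximum finger flexion, in degrees
BEND_STEP = 5.0       # assumed per-frame flexion increment

@dataclass
class RigidObject:
    position: tuple          # world-space centre
    radius: float            # bounding-sphere radius, used as a size proxy

@dataclass
class VirtualHand:
    position: tuple
    finger_bend: dict = field(default_factory=lambda: {
        f: 0.0 for f in ("thumb", "index", "middle", "ring", "pinky")})
    held_object: Optional[RigidObject] = None

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def detect_graspable(hand, objects):
    """Grasping Detection: return the nearest object within reach, if any."""
    in_reach = [o for o in objects
                if distance(hand.position, o.position) <= GRASP_RADIUS]
    return min(in_reach,
               key=lambda o: distance(hand.position, o.position),
               default=None)

def connect(hand, obj):
    """Hand-Object Connection: attach the object so it follows the hand."""
    hand.held_object = obj

def bend_fingers(hand, obj):
    """Finger Bending: flex fingers toward a target that depends on
    object size (a crude proxy: smaller objects need more closure)."""
    target = min(MAX_BEND, MAX_BEND * (1.0 - obj.radius / GRASP_RADIUS))
    for f in hand.finger_bend:
        hand.finger_bend[f] = min(target, hand.finger_bend[f] + BEND_STEP)

def grasp_update(hand, objects):
    """One per-frame update combining the three modules."""
    if hand.held_object is None:
        obj = detect_graspable(hand, objects)
        if obj is not None:
            connect(hand, obj)
    if hand.held_object is not None:
        bend_fingers(hand, hand.held_object)
```

In a real system the connection step would reparent the object's transform under the hand in the scene graph, and bending would stop per-finger on actual mesh contact rather than on a size heuristic.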



    Published In

    VRCAI '22: Proceedings of the 18th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry
    December 2022
    284 pages
    ISBN:9798400700316
    DOI:10.1145/3574131
    • Editors:
    • Enhua Wu,
    • Lionel Ming-Shuan Ni,
    • Zhigeng Pan,
    • Daniel Thalmann,
    • Ping Li,
    • Charlie C.L. Wang,
    • Lei Zhu,
    • Minghao Yang

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. Hand-Object Interaction
    2. Virtual Hands
    3. Virtual Reality

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Funding Sources

• 2021 Science and Technology Innovation Program of Shaanxi Academy of Forestry Science

    Conference

    VRCAI '22

    Acceptance Rates

    Overall Acceptance Rate 51 of 107 submissions, 48%
