DOI: 10.1145/3290605.3300403

Evaluating the Combination of Visual Communication Cues for HMD-based Mixed Reality Remote Collaboration

Published: 02 May 2019

Abstract

Many researchers have studied various visual communication cues (e.g., pointer, sketching, and hand gesture) in Mixed Reality remote collaboration systems for real-world tasks, but the effect of combining these cues has not been well explored. We studied four combinations of cues: hand only, hand + pointer, hand + sketch, and hand + pointer + sketch, across three problem-solving tasks: Lego, Tangram, and Origami. The results showed that participants completed the tasks significantly faster and reported significantly higher usability when the sketch cue was added to the hand gesture cue, but not when the pointer cue was added. Participants also preferred the combinations including hand and sketch cues over the other combinations. However, the additional cues (pointer or sketch) increased perceived mental effort and did not improve the feeling of co-presence. We discuss the implications of these results and directions for future research.

Supplementary Material

  • ZIP File (paper173pvc.zip): Preview video captions
  • AVI File (paper173.avi): Supplemental video
  • MP4 File (paper173p.mp4): Preview video



Published In

CHI '19: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems
May 2019
9077 pages
ISBN:9781450359702
DOI:10.1145/3290605
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. co-presence
  2. mixed reality
  3. remote collaboration
  4. usability
  5. visual communication cue

Qualifiers

  • Research-article

Funding Sources

  • National Research Foundation of Korea (NRF)

Conference

CHI '19

Acceptance Rates

CHI '19 Paper Acceptance Rate 703 of 2,958 submissions, 24%;
Overall Acceptance Rate 6,199 of 26,314 submissions, 24%


