DOI: 10.1145/1111360.1111370

On the usability of gesture interfaces in virtual reality environments

Published: 23 October 2005

Abstract

This paper discusses several usability issues related to the use of gestures as an input mode in multimodal interfaces. Gestures have been suggested before as a natural solution for applications that require hands-free and no-touch interaction with computers, such as virtual reality (VR) environments. We introduce a simple but robust 2D computer-vision-based gesture recognition system that was successfully used for interaction in VR environments such as CAVEs and Powerwalls. This interface was tested under three different scenarios: as a regular pointing device in a GUI, as a navigation tool, and as a visualization tool. Our experiments show that the time to complete simple pointing tasks is considerably longer than with a mouse, and that even short periods of use cause fatigue. Despite these drawbacks, the use of gestures as an alternative mode in multimodal interfaces offers several advantages: it gives quick, natural, and intuitive access to computing resources that may be embedded in the environment, and it scales nicely to group and collaborative applications, where gestures can be used sporadically.
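The pointing-task comparison described in the abstract is the kind of measurement usually analyzed with Fitts's law. The sketch below shows the Shannon formulation of the index of difficulty and the resulting movement-time prediction; the device coefficients in the usage lines are illustrative assumptions, not values reported by the paper.

```python
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation of Fitts's index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def predicted_movement_time(a: float, b: float,
                            distance: float, width: float) -> float:
    """Fitts's law MT = a + b * ID, with device-specific regression
    coefficients a (intercept, s) and b (slope, s/bit)."""
    return a + b * index_of_difficulty(distance, width)

# Hypothetical coefficients for two devices (NOT from the paper):
# a larger slope b means the device slows down more on harder targets.
mouse_mt = predicted_movement_time(a=0.1, b=0.2, distance=512, width=64)
gesture_mt = predicted_movement_time(a=0.3, b=0.5, distance=512, width=64)
```

Under any coefficients where the gesture interface has a higher intercept and slope, its predicted movement time exceeds the mouse's for every target, which is consistent with the slower completion times the abstract reports.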



Published In

CLIHC '05: Proceedings of the 2005 Latin American conference on Human-computer interaction
October 2005
361 pages
ISBN:1595932240
DOI:10.1145/1111360

Sponsors

  • Tecnologia Virtual
  • SIG-CHI Mexico
  • SIG-CHI Brazil
  • Create-Net
  • Microsoft Research
  • SMCC
  • ITESM Cuernavaca
  • Pullman de Morelos

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. computer vision
  2. usability of gesture interfaces
  3. virtual reality

Acceptance Rates

Overall Acceptance Rate 14 of 42 submissions, 33%
