Authors:
C. Siegl; J. Süßmuth; F. Bauer and M. Stamminger
Affiliation:
FAU Erlangen-Nuremberg, Germany
Keyword(s):
HCI, User Interface, Modeling, Interaction, 3D
Related Ontology Subjects/Areas/Topics:
Advanced User Interfaces; Computer Vision, Visualization and Computer Graphics; Evaluation of Human Performance and Usability in Virtual Environments; Interactive Environments; Sketch-Based Interfaces
Abstract:
Recently, 3D input devices such as the Microsoft Kinect sensor or the Leap Motion controller have become increasingly popular; the latter specializes in recognizing hand gestures and advertises very precise localization of tools and fingers. Such devices promise touchless interaction with the computer in three-dimensional space, enabling the design of entirely new user interfaces for natural 3D modeling. However, while implementing a modeling application we found that there are still fundamental problems that cannot easily be solved: the lack of a precise and atomic gesture for enabling and disabling interaction (a clicking gesture), and poor human depth perception and localization within an invisible coordinate frame. In this paper, we show that the precision of the interaction is limited not by hardware but by software constraints.