Research article · Open access

Optimistic Programming of Touch Interaction

Published: 25 August 2014

Abstract

Touch-sensitive surfaces have become a predominant input medium for computing devices. In particular, the multitouch capability of these devices has given rise to rich interaction vocabularies for “real” direct manipulation of user interfaces. However, the richness and flexibility of touch interaction often come with significant complexity for programming these behaviors. In particular, finger touches, though intuitive, are imprecise and lead to ambiguity. Touch input often involves coordinated movements of multiple fingers, as opposed to the single pointer of a traditional WIMP interface. It is challenging not only to detect the intended motion carried out by these fingers but also to determine the target objects being manipulated, because there are multiple focus points. Currently, developers often build touch behaviors by handling raw touch events, which is effortful and error-prone. In this article, we present Touch, a tool that allows developers to easily specify their desired touch behaviors by demonstrating them live on a touch-sensitive device or by selecting them from a list of common behaviors. Developers can then integrate these touch behaviors into their application as resources and via an API exposed by our runtime framework. The integrated tool support enables developers to think and program optimistically about how these touch interactions should behave, without worrying about the underlying complexity and technical details of detecting target behaviors and invoking application logic. We discuss the design of several novel inference algorithms that underlie this tool support and evaluate them against a multitouch dataset that we collected from end users. We also demonstrate the usefulness of our system via an example application.
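To make the contrast with raw-event programming concrete, the following is a minimal, hypothetical sketch of the style of API the abstract describes: the names (GestureRuntime, onGesture, dispatch) and the string-based behavior identifiers are illustrative assumptions, not the article's actual interface. The point is that application code binds named, pre-recognized behaviors to callbacks, rather than tracking individual finger events.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

public class TouchBehaviorSketch {

    // Hypothetical stand-in for a runtime that maps recognized behavior
    // names to application callbacks, so the developer never touches raw
    // per-finger events.
    static class GestureRuntime {
        private final Map<String, Consumer<String>> handlers = new HashMap<>();

        // Register a callback for a behavior the developer demonstrated
        // or picked from a list of common behaviors.
        void onGesture(String behavior, Consumer<String> handler) {
            handlers.put(behavior, handler);
        }

        // In a real system this would be driven by probabilistic inference
        // over touch traces; here we simply dispatch an already-recognized
        // behavior along with its inferred target object.
        void dispatch(String behavior, String target) {
            Consumer<String> h = handlers.get(behavior);
            if (h != null) {
                h.accept(target);
            }
        }
    }

    public static void main(String[] args) {
        GestureRuntime runtime = new GestureRuntime();
        runtime.onGesture("two-finger-pinch",
                target -> System.out.println("zoom " + target));
        runtime.onGesture("one-finger-drag",
                target -> System.out.println("move " + target));

        // Simulate the runtime recognizing a pinch on a photo object.
        runtime.dispatch("two-finger-pinch", "photo");
    }
}
```

The callback registration mirrors the article's premise: recognition and target disambiguation live in the runtime framework, while application code states only what should happen when a behavior occurs.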



Published In

ACM Transactions on Computer-Human Interaction, Volume 21, Issue 4
August 2014
141 pages
ISSN: 1073-0516
EISSN: 1557-7325
DOI: 10.1145/2633907
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 25 August 2014
Accepted: 01 June 2014
Revised: 01 March 2014
Received: 01 October 2013
Published in TOCHI Volume 21, Issue 4


Author Tags

  1. Touch gestures
  2. dynamic Bayesian networks
  3. multitouch interaction
  4. probabilistic inference
  5. rapid prototyping
  6. uncertainty
  7. user interface programming paradigms

