
SoundTrak: Continuous 3D Tracking of a Finger Using Active Acoustics

Published: 30 June 2017

Abstract

The small size of wearable devices limits the efficiency and scope of possible user interactions, as inputs are typically constrained to two dimensions: the touchscreen surface. We present SoundTrak, an active acoustic sensing technique that enables a user to interact with wearable devices in the surrounding 3D space by continuously tracking the finger position with high resolution. The user wears a ring with an embedded miniature speaker emitting an acoustic signal at a specific frequency (e.g., 11 kHz), which is captured by an array of miniature, inexpensive microphones on the target wearable device. A novel algorithm localizes the finger’s position in 3D space by extracting phase information from the received acoustic signals. We evaluated SoundTrak in a volume of space (20 cm × 16 cm × 11 cm) around a smartwatch, and show an average accuracy of 1.3 cm. We report results from a Fitts’ Law experiment with 10 participants evaluating the real-time prototype. We also present a set of applications supported by this 3D input technique, and discuss the practical challenges that must be addressed before widespread use.
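The phase-based ranging idea in the abstract can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the authors’ algorithm: it demodulates an 11 kHz tone to estimate its phase in each audio frame, then converts successive phase changes into range changes. The sample rate, frame length, and function names are assumptions made for the sketch.

```python
import numpy as np

FS = 44_100        # sample rate in Hz (assumed, not from the paper)
F0 = 11_000        # carrier frequency emitted by the ring's speaker
C = 343.0          # speed of sound in air, m/s
WAVELEN = C / F0   # wavelength at 11 kHz, about 3.1 cm

def tone_phase(frame, fs=FS, f0=F0):
    """Estimate the phase of an f0 tone in one audio frame by I/Q demodulation."""
    t = np.arange(len(frame)) / fs
    baseband = frame * np.exp(-2j * np.pi * f0 * t)  # mix the tone down to DC
    return np.angle(baseband.sum())

def range_deltas(phases):
    """Turn a sequence of per-frame phases into per-frame distance changes (m).

    np.unwrap assumes the speaker-to-mic distance changes by less than
    half a wavelength (~1.6 cm) between frames, which bounds finger speed.
    """
    dphi = np.diff(np.unwrap(np.asarray(phases)))
    return -dphi * WAVELEN / (2 * np.pi)  # phase lag grows as range grows
```

Feeding one microphone’s frames through `range_deltas` recovers relative motion along that microphone’s line of sight; with several microphones at known positions, the per-mic range changes can be combined (e.g., by least squares) into a 3D finger position, which is the role the paper’s localization algorithm plays.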

Supplementary Material

zhang.zip: supplemental movie, appendix, image, and software files for SoundTrak: Continuous 3D Tracking of a Finger Using Active Acoustics.



Published In

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 1, Issue 2
June 2017
665 pages
EISSN:2474-9567
DOI:10.1145/3120957

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 30 June 2017
Accepted: 01 March 2017
Revised: 01 February 2017
Received: 01 November 2016
Published in IMWUT Volume 1, Issue 2


Author Tags

  1. 3D input
  2. Acoustic
  3. Finger Tracking
  4. Wearable

Qualifiers

  • Research-article
  • Research
  • Refereed

Funding Sources

  • Georgia Tech Wearable Computing Center Engagement
