EarCommand: "Hearing" Your Silent Speech Commands In Ear

Published: 07 July 2022

Abstract

Intelligent speech interfaces have developed rapidly to meet the growing demand for convenient control of and interaction with wearable/earable and portable devices. To avoid privacy leakage during speech interactions and to strengthen resistance to ambient noise, silent speech interfaces have been widely explored to enable people to interact with mobile/wearable devices without audible sound. However, most existing silent speech solutions require either controlled background illumination or hand involvement to hold a device or perform gestures. In this study, we propose EarCommand, a novel earphone-based, hands-free silent speech interaction approach. Our technique exploits the relationship between the deformation of the ear canal and the movements of the articulators, and takes advantage of this link to recognize different silent speech commands. When tested on human subjects with 32 word-level commands and 25 sentence-level commands, our system achieves a WER (word error rate) of 10.02% for word-level recognition and 12.33% for sentence-level recognition, which indicates the effectiveness of inferring silent speech commands. Moreover, EarCommand shows high reliability and robustness across a variety of configuration settings and environmental conditions. We anticipate that EarCommand can serve as an efficient, intelligent speech interface for hands-free operation, significantly improving the quality and convenience of interactions.
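The WER figures reported above are the standard speech-recognition metric: the word-level edit distance (substitutions, insertions, deletions) between the recognized output and the reference command, normalized by the reference length. A minimal sketch of that computation, for illustration only (this is the generic Levenshtein-based definition, not the paper's implementation):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = minimum edits to turn the first i reference words
    # into the first j hypothesis words (classic dynamic program).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i          # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j          # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = d[i - 1][j] + 1
            insertion = d[i][j - 1] + 1
            d[i][j] = min(substitution, deletion, insertion)
    return d[len(ref)][len(hyp)] / len(ref)

# Example: one deletion ("on") and one substitution ("light" -> "lights")
# against a 4-word reference gives WER = 2/4 = 0.5.
print(wer("turn on the light", "turn the lights"))  # 0.5
```

A WER of 10.02% for word-level commands thus means roughly one word-level error per ten reference words on average.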



Published In

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 6, Issue 2
June 2022, 1551 pages
EISSN: 2474-9567
DOI: 10.1145/3547347
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Acoustic sensing
  2. Ear canal deformation
  3. Earphone
  4. Silent speech

Qualifiers

  • Research-article
  • Research
  • Refereed


Cited By

  • (2024) LR-Auth: Towards Practical Implementation of Implicit User Authentication on Earbuds. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8, 4, 1-27. https://doi.org/10.1145/3699793
  • (2024) HCR-Auth: Reliable Bone Conduction Earphone Authentication with Head Contact Response. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8, 4, 1-27. https://doi.org/10.1145/3699780
  • (2024) Ring-a-Pose: A Ring for Continuous Hand Pose Tracking. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8, 4, 1-30. https://doi.org/10.1145/3699741
  • (2024) Whispering Wearables: Multimodal Approach to Silent Speech Recognition with Head-Worn Devices. Proceedings of the 26th International Conference on Multimodal Interaction, 214-223. https://doi.org/10.1145/3678957.3685720
  • (2024) EyeGesener: Eye Gesture Listener for Smart Glasses Interaction Using Acoustic Sensing. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8, 3, 1-28. https://doi.org/10.1145/3678541
  • (2024) EasyAsk: An In-App Contextual Tutorial Search Assistant for Older Adults with Voice and Touch Inputs. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8, 3, 1-27. https://doi.org/10.1145/3678516
  • (2024) Exploring User-Defined Gestures as Input for Hearables and Recognizing Ear-Level Gestures with IMUs. Proceedings of the ACM on Human-Computer Interaction 8, MHCI, 1-23. https://doi.org/10.1145/3676503
  • (2024) How Proficiency and Feelings Impact the Preference and Perception of Mobile Technology Support in Older Adults. Proceedings of the 26th International ACM SIGACCESS Conference on Computers and Accessibility, 1-5. https://doi.org/10.1145/3663548.3688520
  • (2024) Development and Evaluation of the Mobile Tech Support Questionnaire for Older Adults. Proceedings of the 26th International ACM SIGACCESS Conference on Computers and Accessibility, 1-18. https://doi.org/10.1145/3663548.3675661
  • (2024) EarHover: Mid-Air Gesture Recognition for Hearables Using Sound Leakage Signals. Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, 1-13. https://doi.org/10.1145/3654777.3676367
