DOI: 10.1145/3441852.3471200
Research article · Public Access

American Sign Language Video Anonymization to Support Online Participation of Deaf and Hard of Hearing Users

Published: 17 October 2021

Abstract

Without a commonly accepted writing system for American Sign Language (ASL), Deaf or Hard of Hearing (DHH) ASL signers who wish to express opinions or ask questions online must post a video of their signing, if they prefer not to use written English, a language in which they may feel less proficient. Since the face conveys essential linguistic meaning, the face cannot simply be removed from the video in order to preserve anonymity. Thus, DHH ASL signers cannot easily discuss sensitive, personal, or controversial topics in their primary language, limiting engagement in online debate or inquiries about health or legal issues. We explored several recent attempts to address this problem through development of “face swap” technologies to automatically disguise the face in videos while preserving essential facial expressions and natural human appearance. We presented several prototypes to DHH ASL signers (N=16) and examined their interests in and requirements for such technology. After viewing transformed videos of other signers and of themselves, participants evaluated the understandability, naturalness of appearance, and degree of anonymity protection of these technologies. Our study revealed users’ perception of key trade-offs among these three dimensions, factors that contribute to each, and their views on transformation options enabled by this technology, for use in various contexts. Our findings guide future designers of this technology and inform selection of applications and design features.
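The "face swap" prototypes described in the abstract disguise identity while preserving the facial expressions that carry linguistic meaning in ASL. A common final step in such pipelines is compositing a synthesized face back into the original video frame under a face-region mask. The sketch below shows that compositing step only, in plain NumPy; the function name, the uniform "generated face" patch, and the hand-drawn mask are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def swap_face_region(target: np.ndarray, source: np.ndarray,
                     mask: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Composite `source` pixels into `target` wherever `mask` is nonzero.

    target, source: HxWx3 uint8 frames; mask: HxW float in [0, 1].
    A soft (feathered) mask yields a smoother blend at the boundary.
    """
    m = (mask.astype(np.float32) * alpha)[..., None]   # HxWx1 blend weights
    blended = m * source.astype(np.float32) + (1.0 - m) * target.astype(np.float32)
    return np.clip(blended, 0, 255).astype(np.uint8)

# Tiny demo: paste a uniform "generated face" patch into a black frame.
frame = np.zeros((4, 4, 3), dtype=np.uint8)          # original video frame
generated = np.full((4, 4, 3), 200, dtype=np.uint8)  # synthesized face
face_mask = np.zeros((4, 4), dtype=np.float32)
face_mask[1:3, 1:3] = 1.0                            # hypothetical face region
result = swap_face_region(frame, generated, face_mask)
```

Real systems would obtain the mask from a face-segmentation model and blend with Poisson or multi-band techniques to avoid visible seams, but the per-pixel weighted blend above is the underlying operation.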

Supplementary Material

- VTT File (8711.vtt)
- Supplemental materials (8711-file2.zip)
- MP4 File (8711.mp4): Presentation video


          Published In

          ASSETS '21: Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility
          October 2021
          730 pages
          ISBN:9781450383066
          DOI:10.1145/3441852

Publisher

Association for Computing Machinery, New York, NY, United States


          Author Tags

          1. Anonymization
          2. Deaf and Hard of Hearing

          Qualifiers

          • Research-article
          • Research
          • Refereed limited

          Funding Sources

          Conference

ASSETS '21

          Acceptance Rates

          ASSETS '21 Paper Acceptance Rate 36 of 134 submissions, 27%;
          Overall Acceptance Rate 436 of 1,556 submissions, 28%
