DOI: 10.1145/3613904.3642222 | CHI Conference Proceedings | Research Article

TADA: Making Node-link Diagrams Accessible to Blind and Low-Vision People

Published: 11 May 2024

Abstract

Diagrams often appear as node-link representations in contexts such as taxonomies, mind maps, and networks in textbooks. Despite their pervasiveness, they present accessibility challenges for blind and low-vision people. To address these challenges, we introduce Touch-and-Audio-based Diagram Access (TADA), a tablet-based interactive system that makes diagram exploration accessible through musical tones and speech. We designed TADA informed by an interview study with 15 participants who shared their challenges and strategies with diagrams. TADA enables people to access a diagram by: i) engaging in open-ended touch-based explorations, ii) searching for nodes, iii) navigating between nodes, and iv) filtering information. We evaluated TADA with 25 participants and found it useful for gaining different perspectives on diagrammatic information.
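The access modes above all operate over an underlying graph model of the diagram. As a purely illustrative sketch (this is not TADA's actual implementation; the pentatonic scale mapping and the adjacency-list representation are our assumptions), here is one way a touch-based system could map a node's connectivity to a musical pitch, so that a highly connected hub sounds different under the fingertip than a leaf node:

```python
# Hypothetical sketch, not TADA's implementation: map node degree
# (number of attached links) onto a pentatonic pitch so that more
# connected nodes play higher notes.

PENTATONIC_SEMITONES = [0, 2, 4, 7, 9]  # C-major pentatonic offsets
BASE_FREQ = 261.63  # middle C, in Hz


def degree(adjacency, node):
    """Number of links attached to a node in an adjacency-list graph."""
    return len(adjacency.get(node, []))


def node_pitch(adjacency, node):
    """Map a node's degree onto a pentatonic pitch (in Hz)."""
    d = degree(adjacency, node)
    step = PENTATONIC_SEMITONES[d % len(PENTATONIC_SEMITONES)]
    octave = d // len(PENTATONIC_SEMITONES)
    # Standard equal-temperament conversion: one semitone = 2**(1/12).
    return BASE_FREQ * 2 ** ((step + 12 * octave) / 12)


# A tiny hub-and-spokes diagram: "hub" connects to three leaves.
graph = {
    "hub": ["a", "b", "c"],
    "a": ["hub"], "b": ["hub"], "c": ["hub"],
}
```

Restricting pitches to a pentatonic scale is one common sonification choice because any pair of simultaneously sounding nodes remains consonant; the paper's own tone design may differ.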

Supplemental Material

MP4 File - Video Preview (transcript included)
MP4 File - Video Presentation (transcript included)
MP4 File - TADA: Video Figure (transcript included). This video figure provides an overview of TADA and demonstrates its various interaction techniques.
ZIP File - Audio Samples from TADA's Five-finger Dome Interaction
The supplementary materials for TADA contain four audio files in WAV format, sampled from the Five-finger Dome interaction:
- 'cluster.wav': audio sample of a dome containing a cluster.
- 'ring.wav': audio sample of a ring structure in which each node connects to two links.
- 'no-connections.wav': audio sample of nodes without any connections.
- 'hub-and-spokes.wav': audio sample of a hub-and-spokes structure in which one central node connects to all other nodes.
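The four sampled structures differ in their node-degree patterns, which is what makes them acoustically distinguishable in the first place. As a hypothetical heuristic (our own illustration, not a method from the paper), a small graph under the fingers could be labeled by inspecting its degrees:

```python
# Hypothetical heuristic, not TADA's method: distinguish the four
# sampled structures by the degree pattern of the touched nodes.

def classify_structure(adjacency):
    """Label a small adjacency-list graph as one of the four structures."""
    degrees = sorted(len(neighbors) for neighbors in adjacency.values())
    n = len(degrees)
    if all(d == 0 for d in degrees):
        return "no-connections"          # isolated nodes
    if all(d == 2 for d in degrees):
        return "ring"                    # every node links to exactly two
    if degrees[-1] == n - 1 and all(d == 1 for d in degrees[:-1]):
        return "hub-and-spokes"          # one center linked to all leaves
    return "cluster"                     # densely interlinked fallback


# Example: a five-node ring, where each node connects to its two neighbors.
ring = {1: [2, 5], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4, 1]}
```

These degree tests are coarse (a triangle, for instance, matches the ring rule), but they capture the intuition behind why each structure yields a recognizably different chord of tones.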

[107]
Fernando Vidal-Verdu and Moustapha Hafez. 2007. Graphical Tactile Displays for Visually-Impaired People. IEEE Transactions on Neural Systems and Rehabilitation Engineering 15, 1 (Mar 2007), 119–130. https://doi.org/10.1109/TNSRE.2007.891375
[108]
Ge Wang and Perry R Cook. 2003. ChucK: A Concurrent, On-the-fly, Audio Programming Language. In Proceedings of the 2003 International Computer Music Conference. 8.
[109]
Colin Ware. 2013. Information Visualization: Perception for Design. Elsevier.
[110]
Robin Williams. 2015. The non-designer’s design book: Design and typographic principles for the visual novice. Pearson Education.
[111]
Ikuko Eguchi Yairi, Masamitsu Takano, Motoki Shino, and Minoru Kamata. 2008. Expression of paths and buildings for universal designed interactive map with due consideration for visually impaired people. In 2008 IEEE International Conference on Systems, Man and Cybernetics. 524–529. https://doi.org/10.1109/ICSMC.2008.4811330
[112]
Naohide Yamamoto and Amy L. Shelton. 2005. Visual and proprioceptive representations in spatial memory. Memory & Cognition 33, 1 (Jan 2005), 140–150. https://doi.org/10.3758/BF03195304
[113]
Yalong Yang, Kim Marriott, Matthew Butler, Cagatay Goncu, and Leona Holloway. 2020. Tactile Presentation of Network Data: Text, Matrix or Diagram?. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems(CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3313831.3376367 Citation Key: yang-TactilePresentationNetwork-2020.
[114]
Limin Zeng and Gerhard Weber. 2010. Audio-Haptic Browser for a Geographical Information System. In Computers Helping People with Special Needs(Lecture Notes in Computer Science), Klaus Miesenberger, Joachim Klaus, Wolfgang Zagler, and Arthur Karshmer (Eds.). Springer, Berlin, Heidelberg, 466–473. https://doi.org/10.1007/978-3-642-14100-3_70
[115]
Limin Zeng and Gerhard Weber. 2012. ATMap: Annotated Tactile Maps for the Visually Impaired. In Cognitive Behavioural Systems(Lecture Notes in Computer Science), Anna Esposito, Antonietta M. Esposito, Alessandro Vinciarelli, Rüdiger Hoffmann, and Vincent C. Müller (Eds.). Springer, Berlin, Heidelberg, 290–298. https://doi.org/10.1007/978-3-642-34584-5_25
[116]
Zhuohao Zhang and Jacob O. Wobbrock. 2022. A11yBoard: Using Multimodal Input and Output to Make Digital Artboards Accessible to Blind Users. In Adjunct Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology(UIST ’22 Adjunct). Association for Computing Machinery, New York, NY, USA, 1–4. https://doi.org/10.1145/3526114.3558695
[117]
Zhuohao (Jerry) Zhang and Jacob O. Wobbrock. 2023. A11yBoard: Making Digital Artboards Accessible to Blind and Low-Vision Users. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. ACM, Hamburg Germany, 1–17. https://doi.org/10.1145/3544548.3580655
[118]
Jonathan Zong, Crystal Lee, Alan Lundgard, JiWoong Jang, Daniel Hajas, and Arvind Satyanarayan. 2022. Rich Screen Reader Experiences for Accessible Data Visualization. Computer Graphics Forum 41, 3 (2022), 15–27. https://doi.org/10.1111/cgf.14519

Published In

CHI '24: Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems
May 2024, 18961 pages
ISBN: 9798400703300
DOI: 10.1145/3613904
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 11 May 2024

Author Tags

  1. Accessibility
  2. Artifact or System
  3. Assistive Technologies
  4. Gestures
  5. Haptics
  6. Individuals with Disabilities
  7. Pointing
  8. Touch

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

CHI '24

Acceptance Rates

Overall Acceptance Rate: 6,199 of 26,314 submissions (24%)

Article Metrics

  • Downloads (last 12 months): 815
  • Downloads (last 6 weeks): 131

Reflects downloads up to 28 Dec 2024
