DOI: 10.1145/3382507.3418884

Eye-Tracking to Predict User Cognitive Abilities and Performance for User-Adaptive Narrative Visualizations

Published: 22 October 2020

Abstract

We leverage eye-tracking data to predict user performance and levels of cognitive abilities while reading magazine-style narrative visualizations (MSNVs), a widespread form of multimodal documents that combine text and visualizations. Such predictions are motivated by recent interest in devising user-adaptive MSNVs that can dynamically adapt to a user's needs. Our results provide evidence for the feasibility of real-time user modeling in MSNVs, as we are the first to consider eye-tracking data for predicting task comprehension and cognitive abilities while processing multimodal documents. We conclude with a discussion of the implications for the design of personalized MSNVs.

Supplementary Material

MP4 File (3382507.3418884.mp4)
Magazine-style narrative visualizations, or MSNVs, are a very common type of information visualization that combines text and graphical components. Processing MSNVs can be challenging because readers must split their attention between the two information sources, which can increase cognitive load and hurt comprehension. One way to address users' comprehension challenges is to provide adaptations that facilitate processing of the visualization. However, recent work has shown that such adaptations are useful only for users with specific levels of cognitive abilities, and if they are provided when unwanted they can be intrusive and distracting.

In this presentation we describe our approach to using eye-tracking data to predict task performance measures as well as users' cognitive abilities, which would help identify when, and for whom, to provide these adaptations, informing the design of personalized, user-adaptive MSNVs.
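As a concrete illustration of this kind of prediction pipeline, the sketch below summarizes per-user fixation records into standard aggregate gaze features and cross-validates a classifier on them. This is a minimal sketch under assumed inputs, not the paper's reported pipeline: the feature set, the random-forest model, and the synthetic data are our own illustrative choices.

```python
# Minimal sketch (an assumption, not the authors' actual pipeline): predict a
# binary user label (e.g., high vs. low comprehension) from aggregate gaze
# features. Feature set and model choice are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

def gaze_features(fixations):
    """Summarize one user's fixation records, given as rows of
    (x, y, duration_ms, pupil_mm), into a fixed-length feature vector."""
    fixations = np.asarray(fixations, dtype=float)
    # Approximate saccade lengths as distances between consecutive fixations.
    saccades = np.linalg.norm(np.diff(fixations[:, :2], axis=0), axis=1)
    return np.array([
        len(fixations),            # fixation count
        fixations[:, 2].mean(),    # mean fixation duration
        saccades.mean(),           # mean saccade length
        fixations[:, 3].mean(),    # mean pupil size
    ])

# Synthetic stand-in data: 40 users, 50 fixations each, balanced labels.
rng = np.random.default_rng(0)
X = np.stack([gaze_features(rng.random((50, 4)) * [800, 600, 400, 5])
              for _ in range(40)])
y = np.tile([0, 1], 20)  # 1 = high comprehension, 0 = low

# Cross-validated accuracy of a random-forest classifier.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
print(f"mean CV accuracy: {cross_val_score(clf, X, y, cv=cv).mean():.2f}")
```

In an adaptive setting, such a classifier would be trained offline on labeled gaze data from prior users and then applied to a new reader's accumulating gaze sample to decide whether to trigger an adaptation.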



Published In

ICMI '20: Proceedings of the 2020 International Conference on Multimodal Interaction
October 2020
920 pages
ISBN:9781450375818
DOI:10.1145/3382507
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. adaptive visualizations
  2. classification
  3. eye-tracking
  4. narrative visualizations
  5. user characteristics
  6. user model

Qualifiers

  • Research-article

Conference

ICMI '20: International Conference on Multimodal Interaction
October 25-29, 2020
Virtual Event, Netherlands

Acceptance Rates

Overall Acceptance Rate 453 of 1,080 submissions, 42%
