Research Article | Open Access

Understanding the Impact of the Reality-Virtuality Continuum on Visual Search Using Fixation-Related Potentials and Eye Tracking Features

Published: 24 September 2024

Abstract

While Mixed Reality allows the seamless blending of digital content into users' surroundings, it remains unclear whether fusing digital and physical information taxes users' perceptual and cognitive resources differently. Although this fusion offers numerous opportunities to present additional information, it also introduces undesirable side effects, such as split attention and increased visual complexity. We conducted a visual search study in three manifestations of Mixed Reality (Augmented Reality, Augmented Virtuality, Virtual Reality) to understand how the environment affects visual search behavior. In a multimodal evaluation, we measured Fixation-Related Potentials (FRPs) alongside eye tracking to assess search efficiency, attention allocation, and behavioral measures. Our findings indicate distinct patterns in FRPs and eye-tracking data that reflect varying cognitive demands across environments. Specifically, AR environments were associated with increased workload, as indicated by decreased FRP P3 amplitudes and more scattered eye-movement patterns, impairing users' ability to identify target information efficiently. Participants also reported AR as the most demanding and distracting environment. These insights inform design implications for adaptive MR systems, emphasizing the need for interfaces that dynamically respond to users' cognitive load based on physiological inputs.
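
The abstract's core measure, the FRP, is obtained by time-locking EEG epochs to fixation onsets rather than to stimulus onsets. The paper does not reproduce its analysis code here, but the general pipeline can be sketched with MNE-Python, a common open-source tool for this kind of analysis. The following is a minimal sketch under stated assumptions: it presumes the EEG and eye-tracking streams are already synchronized, and the file name, fixation timestamps, the Pz channel, and the 300-500 ms P3 window are illustrative choices, not the authors' exact parameters.

import numpy as np
import mne

# Load preprocessed (filtered, artifact-corrected) EEG data.
# File name is hypothetical.
raw = mne.io.read_raw_fif("sub-01_cleaned_raw.fif", preload=True)

# Hypothetical fixation-onset sample indices from the eye tracker,
# already mapped onto the EEG clock.
fixation_onsets = np.array([1520, 3890, 7215, 11040])

# Build an MNE events array: one row of (sample, 0, event_id) per fixation.
events = np.column_stack([
    fixation_onsets,
    np.zeros(len(fixation_onsets), dtype=int),
    np.ones(len(fixation_onsets), dtype=int),
])

# Epoch from 200 ms before to 800 ms after each fixation onset,
# baseline-correcting on the pre-fixation interval.
epochs = mne.Epochs(raw, events, event_id={"fixation": 1},
                    tmin=-0.2, tmax=0.8, baseline=(-0.2, 0.0),
                    preload=True)

# Average across epochs to obtain the FRP, then quantify P3 amplitude
# as the mean voltage in a typical 300-500 ms window at Pz (assumed channel).
frp = epochs.average()
p3 = frp.copy().crop(tmin=0.3, tmax=0.5).pick(["Pz"])
print(f"Mean P3 amplitude at Pz: {p3.data.mean() * 1e6:.2f} uV")

Comparing this mean P3 amplitude across the AR, AV, and VR conditions (e.g., with a mixed-effects model over participants) is one plausible way to operationalize the workload comparison the abstract describes.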


Cited By

  • Evaluating Typing Performance in Different Mixed Reality Manifestations using Physiological Features. Proceedings of the ACM on Human-Computer Interaction 8, ISS (2024), 377–406. https://doi.org/10.1145/3698142
  • Searching Across Realities: Investigating ERPs and Eye-Tracking Correlates of Visual Search in Mixed Reality. IEEE Transactions on Visualization and Computer Graphics 30, 11 (2024), 6997–7007. https://doi.org/10.1109/TVCG.2024.3456172

    Published In

    Proceedings of the ACM on Human-Computer Interaction, Volume 8, Issue MHCI
    September 2024, 1136 pages
    EISSN: 2573-0142
    DOI: 10.1145/3697825
    This work is licensed under a Creative Commons Attribution 4.0 International License.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 24 September 2024
    Published in PACMHCI Volume 8, Issue MHCI


    Author Tags

    1. EEG
    2. eye tracking
    3. fixation-related potentials
    4. mixed reality
    5. physiological computing
    6. visual search

    Qualifiers

    • Research-article


