
Understanding Applicants' Reactions to Asynchronous Video Interviews Through Self-reports and Nonverbal Cues

Published: 22 October 2020

Abstract

Asynchronous video interviews (AVIs) are increasingly used by organizations in their hiring processes. In this mode of interviewing, applicants record their responses to predefined interview questions with a webcam via an online platform. The growing adoption of AVIs is driven by employers' perceived benefits in cost and scale. However, little research has examined applicants' reactions to these new interview methods. In this work, we investigate applicants' reactions to an AVI platform using self-reported measures previously validated in the psychology literature. We also investigate the connections between these measures and the nonverbal behavior displayed during the interviews. We find that participants who found the platform creepy, or who had concerns about privacy, reported lower interview performance than participants without such concerns. We also observe weak correlations between displayed nonverbal cues and these self-reported measures. Finally, inference experiments achieve overall low performance with respect to explaining applicants' reactions. Overall, our results reveal that participants who are not at ease with AVIs (i.e., those with a high creepy-ambiguity score) might be unfairly penalized. This has implications for improving hiring practices that use AVIs.
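The inference experiments described in the abstract can be pictured with a minimal, hypothetical sketch: predicting a self-reported reaction score from per-applicant aggregate nonverbal cues, evaluated with cross-validation. Every feature name and all data below are invented for illustration; the paper's actual pipeline extracts real facial and vocal cues (e.g., with tools such as OpenFace and openSMILE) and uses its own models and labels.

```python
# Hypothetical sketch: regressing a self-reported reaction score
# (e.g., perceived creepiness on a Likert-style scale) on aggregate
# nonverbal features. All data here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_applicants = 60
# Invented per-applicant aggregate cues:
# [smile ratio, gaze-at-camera ratio, speaking time, mean pitch]
X = rng.random((n_applicants, 4))
# Synthetic self-reported score on a 1-5 scale.
y = 1 + 4 * rng.random(n_applicants)

model = RandomForestRegressor(n_estimators=100, random_state=0)
# Cross-validated R^2: values near (or below) zero correspond to the
# kind of low inference performance the abstract reports.
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"mean cross-validated R^2: {scores.mean():.2f}")
```

With purely random features and labels, as here, the cross-validated R^2 is expected to hover around or below zero; the point of the sketch is the evaluation setup, not the score itself.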

Supplementary Material

MP4 File (3382507.3418869.mp4)
Presentation video



Published In

ICMI '20: Proceedings of the 2020 International Conference on Multimodal Interaction
October 2020, 920 pages
ISBN: 9781450375818
DOI: 10.1145/3382507

Publisher

Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. multimodal interaction
      2. nonverbal behavior
      3. self-reported measures
      4. social computing
      5. video interviews

      Qualifiers

      • Research-article

      Funding Sources

      • Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung

Conference

ICMI '20: International Conference on Multimodal Interaction
October 25-29, 2020, Virtual Event, Netherlands

      Acceptance Rates

      Overall Acceptance Rate 453 of 1,080 submissions, 42%


      Cited By

• (2024) Do We Need To Watch It All? Efficient Job Interview Video Processing with Differentiable Masking. Proceedings of the 26th International Conference on Multimodal Interaction, 565-574. DOI: 10.1145/3678957.3685718
• (2024) Tell Me More! Examining the Benefits of Adding Structured Probing in Asynchronous Video Interviews. International Journal of Selection and Assessment 33(1). DOI: 10.1111/ijsa.12514
• (2024) Automated Scoring of Asynchronous Interview Videos Based on Multi-Modal Window-Consistency Fusion. IEEE Transactions on Affective Computing 15(3), 799-814. DOI: 10.1109/TAFFC.2023.3294335
• (2023) Does media richness influence job applicants' experience in asynchronous video interviews? Examining social presence, impression management, anxiety, and performance. International Journal of Selection and Assessment 32(1), 54-68. DOI: 10.1111/ijsa.12448
• (2023) "The interviewer is a machine!" Investigating the effects of conventional and technology-mediated interview methods on interviewee reactions and behavior. International Journal of Selection and Assessment 31(3), 403-419. DOI: 10.1111/ijsa.12433
• (2023) Deciphering the Role of Artificial Intelligence in Health Care, Learning and Development. The Adoption and Effect of Artificial Intelligence on Human Resources Management, Part B, 149-179. DOI: 10.1108/978-1-80455-662-720230010
• (2023) Ready? Camera rolling… action! Examining interviewee training and practice opportunities in asynchronous video interviews. Journal of Vocational Behavior, 103912. DOI: 10.1016/j.jvb.2023.103912
• (2022) Understanding Interviewees' Perceptions and Behaviour towards Verbally and Non-verbally Expressive Virtual Interviewing Agents. Companion Publication of the 2022 International Conference on Multimodal Interaction, 61-69. DOI: 10.1145/3536220.3558802
