DOI: 10.1145/3126594.3126640

More than a Feeling: The MiFace Framework for Defining Facial Communication Mappings

Published: 20 October 2017

Abstract

Facial expressions transmit a variety of social, grammatical, and affective signals. For technology to leverage this rich source of communication, tools that better model the breadth of information they convey are required. MiFace is a novel framework for creating expression lexicons that map signal values to parameterized facial muscle movements. In traditional mapping paradigms using posed photographs, naïve judges select from predetermined label sets and movements are inferred by trained experts. The set of generally accepted expressions established in this way is limited to six basic displays of affect. In contrast, our approach generatively simulates muscle movements on a 3D avatar. By applying natural language processing techniques to crowdsourced free-response labels for the resulting images, we efficiently converge on an expression's value across signal categories. Two studies returned 218 discriminable facial expressions with 51 unique labels. The six basic emotions are included, but we additionally define such nuanced expressions as embarrassed, curious, and hopeful.
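
The abstract describes converging crowdsourced free-response labels through natural language processing, but the page carries no implementation details. The sketch below is therefore only an illustration of the general idea, not the authors' method: it merges near-synonymous crowd labels by word-embedding similarity and accepts a majority label once agreement is high enough. The GloVe file path, similarity threshold, and consensus rule are all assumptions invented for the example.

    # Illustrative sketch only -- not the MiFace implementation.
    # Merges near-synonymous crowd labels via word-embedding similarity,
    # then accepts a majority label. Assumes a local GloVe vectors file;
    # thresholds are invented for the example.
    from collections import Counter
    import numpy as np

    def load_glove(path):
        # Parse a GloVe text file ("word v1 v2 ...") into {word: vector}.
        vecs = {}
        with open(path, encoding="utf-8") as f:
            for line in f:
                parts = line.rstrip().split(" ")
                vecs[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
        return vecs

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def converge(labels, vecs, sim_threshold=0.65, consensus=0.5):
        # Count normalized labels; frequent labels absorb rarer synonyms.
        counts = Counter(l.lower().strip() for l in labels)
        merged = Counter()
        for label, n in counts.most_common():
            for rep in list(merged):
                if (label in vecs and rep in vecs
                        and cosine(vecs[label], vecs[rep]) >= sim_threshold):
                    merged[rep] += n
                    break
            else:
                merged[label] += n
        if not merged:
            return None
        # Accept the top label only if it wins a clear share of judgments.
        top, top_n = merged.most_common(1)[0]
        return top if top_n / sum(merged.values()) >= consensus else None

    # Example usage (assuming glove.6B.50d.txt is available locally):
    #   vecs = load_glove("glove.6B.50d.txt")
    #   converge(["happy", "glad", "joyful", "bored"], vecs)

The paper's actual pipeline would also resolve which signal category (social, grammatical, or affective) a converged label belongs to; this sketch covers only the label-convergence step.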

Supplementary Material

TXT file (uistf3635.txt): describes the supplementary material
ZIP file (uistf3635.zip): see uistf3635.txt for details



Published In

UIST '17: Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology
October 2017, 870 pages
ISBN: 9781450349819
DOI: 10.1145/3126594

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States



Author Tags

1. 3D modeling
2. affective computing
3. avatars
4. facial expression recognition
5. natural language processing
6. social signal processing
7. virtual humans

Qualifiers

• Research article

Conference

UIST '17

Acceptance Rates

UIST '17 paper acceptance rate: 73 of 324 submissions, 23%
Overall acceptance rate: 561 of 2,567 submissions, 22%

Article Metrics

• Downloads (last 12 months): 10
• Downloads (last 6 weeks): 1

Reflects downloads up to 07 Nov 2024
