
Exploring Personal Memories and Video Content as Context for Facial Behavior in Predictions of Video-Induced Emotions

Published: 22 October 2020

Abstract

Empirical evidence suggests that the emotional meaning of facial behavior in isolation is often ambiguous in real-world conditions. While humans complement interpretations of others' faces with additional reasoning about context, automated approaches rarely display such context-sensitivity. Empirical findings indicate that the personal memories triggered by videos are crucial for predicting viewers' emotional responses to such videos, in some cases even more so than the videos' audiovisual content. In this article, we explore the benefits of personal memories as context for facial behavior analysis. We conduct a series of multimodal machine learning experiments combining the automatic analysis of video-viewers' faces with that of two types of context information for affective predictions: (1) self-reported free-text descriptions of triggered memories and (2) a video's audiovisual content. Our results demonstrate that both sources of context provide models with information about variation in viewers' affective responses that complements facial analysis and each other.
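The multimodal setup the abstract describes can be illustrated with a generic feature-level fusion sketch. The feature extractors, dimensionalities, and data below are hypothetical stand-ins, not the paper's actual pipeline; the sketch only shows the general pattern of concatenating per-modality features (facial behavior, memory-description text, video content) before regressing onto a self-reported affect rating.

```python
# Hedged sketch of early (feature-level) multimodal fusion for predicting
# a viewer's self-reported affect. All features and targets are random
# placeholders; dimensionalities are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200                                 # number of viewing sessions
face = rng.normal(size=(n, 17))         # e.g. facial action-unit statistics
memory = rng.normal(size=(n, 50))       # e.g. embedding of the memory description
video = rng.normal(size=(n, 32))        # e.g. audiovisual content features
valence = rng.normal(size=n)            # self-reported affect target

# Concatenate modalities into one feature vector per session, then regress.
fused = np.hstack([face, memory, video])
model = Ridge(alpha=1.0)
scores = cross_val_score(model, fused, valence, cv=5, scoring="r2")
print(scores.shape)
```

Comparing such a fused model against single-modality baselines (fitting on `face`, `memory`, or `video` alone) is the standard way to test whether each context source contributes complementary information.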

Supplementary Material

MP4 File (3382507.3418814.mp4)
Presentation of the paper "Exploring Personal Memories and Video Content as Context for Facial Behavior in Predictions of Video-Induced Emotions" given by Bernd Dudzik.




Published In

ICMI '20: Proceedings of the 2020 International Conference on Multimodal Interaction
October 2020
920 pages
ISBN:9781450375818
DOI:10.1145/3382507

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. affect detection
  2. context-awareness
  3. emotion recognition

Qualifiers

  • Research-article

Conference

ICMI '20: International Conference on Multimodal Interaction
October 25-29, 2020
Virtual Event, Netherlands

Acceptance Rates

Overall Acceptance Rate: 453 of 1,080 submissions, 42%



Cited By

  • (2023) The 5th Workshop on Modeling Socio-Emotional and Cognitive Processes from Multimodal Data in the Wild (MSECP-Wild). Proceedings of the 25th International Conference on Multimodal Interaction, 828-829. https://doi.org/10.1145/3577190.3616883
  • (2023) Collecting Mementos: A Multimodal Dataset for Context-Sensitive Modeling of Affect and Memory Processing in Responses to Videos. IEEE Transactions on Affective Computing 14, 2, 1249-1266. https://doi.org/10.1109/TAFFC.2021.3089584
  • (2023) End-to-End Continuous Speech Emotion Recognition in Real-life Customer Service Call Center Conversations. 2023 11th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), 1-8. https://doi.org/10.1109/ACIIW59127.2023.10388120
  • (2023) Contextual Emotion Estimation from Image Captions. 2023 11th International Conference on Affective Computing and Intelligent Interaction (ACII), 1-8. https://doi.org/10.1109/ACII59096.2023.10388198
  • (2022) The 4th Workshop on Modeling Socio-Emotional and Cognitive Processes from Multimodal Data In-the-Wild (MSECP-Wild). Proceedings of the 2022 International Conference on Multimodal Interaction, 803-804. https://doi.org/10.1145/3536221.3564029
  • (2022) Contextual modulation of affect: Comparing humans and deep neural networks. Companion Publication of the 2022 International Conference on Multimodal Interaction, 127-133. https://doi.org/10.1145/3536220.3558036
