Towards Causal Effect Estimation of Emotional Labeling of Watched Videos

Authors

  • Eanes Torres Pereira, Unidade Acadêmica de Sistemas e Computação, Centro de Engenharia Elétrica e Informática, Universidade Federal de Campina Grande, https://orcid.org/0000-0002-9717-794X
  • Geovane do Nascimento Silva, Unidade Acadêmica de Sistemas e Computação, Centro de Engenharia Elétrica e Informática, Universidade Federal de Campina Grande

DOI:

https://doi.org/10.22456/2175-2745.111817

Keywords:

affective computing, causal inference, pattern recognition, multimedia

Abstract

Emotions play a crucial role in human life, and they can be measured in many ways; likewise, many methodologies exist for eliciting them. Elicitation through video watching is an important approach for creating emotion datasets. However, the causal link between video content and the elicited emotions has not been well explained by scientific research. In this article, we present an approach for estimating the causal effect of video content on elicited emotion. Do-calculus was employed to compute the causal inferences, and a structural causal model (SCM) was proposed with the following variables: EEG signal, age, gender, video content, like/dislike, and emotional quadrant. To evaluate the approach, EEG data were collected from volunteers while they watched a sample of videos from the LIRIS-ACCEDE dataset. A total of 48 causal effects were statistically evaluated to check the causal impact of age, gender, and video content on liking and emotion. The results show that the approach generalizes: it can be applied to any dataset that contains the variables of the proposed SCM and, more broadly, to any similar dataset for which an appropriate SCM is provided.
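
To make the estimation step concrete, here is a minimal sketch of how such a causal effect can be computed with pyAgrum, the Python layer of the aGrUM framework cited in the references below. The graph structure, variable names, and domain sizes are illustrative assumptions for this sketch, not the SCM actually proposed in the paper.

    import pyAgrum as gum
    import pyAgrum.causal as csl

    # Hypothetical SCM over the paper's variables (structure assumed for
    # illustration): video content, age, and gender influence the elicited
    # emotional quadrant; video and emotion influence liking; emotion
    # drives the (here coarsely discretized) EEG signal.
    bn = gum.fastBN("video[4]->emotion[4];age[3]->emotion;gender[2]->emotion;"
                    "video->liking[2];emotion->liking;emotion->eeg[2]")
    # ... in practice the CPTs would be estimated from the collected EEG data ...

    cm = csl.CausalModel(bn)

    # P(emotion | do(video)): effect of intervening on video content.
    formula, effect, explanation = csl.causalImpact(cm, on="emotion", doing="video")
    print(formula.toLatex())  # identification formula derived via do-calculus
    print(effect)             # estimated interventional distribution

Repeating the call for each treatment/outcome pair (age, gender, and video content against liking and emotion, across the values of each treatment) yields a battery of effects like the 48 evaluated in the paper.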

Author Biographies

Eanes Torres Pereira, Unidade Acadêmica de Sistemas e Computação, Centro de Engenharia Elétrica e Informática, Universidade Federal de Campina Grande

Professor in the Unidade Acadêmica de Sistemas e Computação

Geovane do Nascimento Silva, Unidade Acadêmica de Sistemas e Computação, Centro de Engenharia Elétrica e Informática, Universidade Federal de Campina Grande

Undergraduate student in the Unidade Acadêmica de Sistemas e Computação

References

KOELSTRA, S. et al. DEAP: A database for emotion analysis; using physiological signals. IEEE Transactions on Affective Computing, IEEE, v. 3, n. 1, p. 18–31, 2012.

SOLEYMANI, M. et al. A multimodal database for affect recognition and implicit tagging. IEEE Transactions on Affective Computing, IEEE, v. 3, n. 1, p. 1–14, 2012.

ZHENG, W.-L.; LU, B.-L. Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks. IEEE Transactions on Autonomous Mental Development, v. 7, n. 3, p. 162–175, 2015.

PEARL, J. Causality: Models, Reasoning, and Inference. Cambridge, England: Cambridge University Press, 2009.

LANZA, S. T.; MOORE, J. E.; BUTERA, N. M. Drawing causal inferences using propensity scores: A practical guide for community psychologists. American Journal of Community Psychology, v. 52, p. 380–392, 2013.

PEARL, J. et al. Causal Inference in Statistics: A Primer. United Kingdom: Wiley, 2016.

NGUYEN, P. et al. Age and gender classification using EEG paralinguistic features. In: 6th Annual International IEEE EMBS Conference on Neural Engineering. San Diego, CA, USA: IEEE, 2013. p. 1295–1298.

PUTTEN, M. J. A. M. van; OLBRICH, S.; ARNS, M. Predicting sex from brain rhythms with deep learning. Scientific Reports, v. 8, p. 1–7, 2018.

PETRANTONAKIS, P. C.; HADJILEONTIADIS, L. J. Emotion recognition from EEG using higher order crossings. IEEE Transactions on Information Technology in Biomedicine, v. 14, n. 2, p. 186–197, 2010.

PEREIRA, E. T. et al. Empirical evidence relating EEG signal duration to emotion classification performance. IEEE Transactions on Affective Computing, p. 1–12, 2018.

JENKE, R.; PEER, A.; BUSS, M. Feature extraction and selection for emotion recognition from EEG. IEEE Transactions on Affective Computing, v. 5, n. 3, p. 327–339, 2014.

BRADLEY, M. M.; LANG, P. J. International affective picture system. In: ZEIGLER-HILL, V.; SHACKELFORD, T. K. (Ed.). Encyclopedia of Personality and Individual Differences. Switzerland: Springer International Publishing, 2017. p. 1–4.

FOURATI, R. et al. Unsupervised learning in reservoir computing for EEG-based emotion recognition. IEEE Transactions on Affective Computing, p. 1–1, 2020.

PEREIRA, E. T.; GOMES, H. M. The role of data balancing for emotion classification using EEG signals. In: Digital Signal Processing Conference. Beijing, China: IEEE, 2016. p. 1–6.

LIU, Y.-J. et al. Real-time movie-induced discrete emotion recognition from EEG signals. IEEE Transactions on Affective Computing, v. 9, n. 4, p. 550–562, 2017.

SONG, T. et al. EEG emotion recognition using dynamical graph convolutional neural networks. IEEE Transactions on Affective Computing, p. 1–10, 2018.

ZHENG, W.-L.; ZHU, J.-Y.; LU, B.-L. Identifying stable patterns over time for emotion recognition from EEG. IEEE Transactions on Affective Computing, v. 10, n. 3, p. 417–429, 2019.

HUNTER, M. et al. The Australian EEG database. Clinical EEG and Neuroscience, v. 36, n. 2, p. 76–81, 2005.

TOMESCU, M. et al. From swing to cane: Sex differences of EEG resting-state temporal patterns during maturation and aging. Developmental Cognitive Neuroscience, v. 31, p. 58–66, 2018.

VANDENBOSCH, M. M. L. J. Z. et al. EEG-based age-prediction models as stable and heritable indicators of brain maturational level in children and adolescents. Human Brain Mapping, Wiley, v. 40, p. 1919–1926, 2018.

HU, J. An approach to EEG-based gender recognition using entropy measurement methods. Knowledge-Based Systems, p. 1–8, 2018.

KAUR, B.; SINGH, D.; ROY, P. P. Age and gender classification using brain-computer interface. Neural Computing and Applications, Springer, v. 31, p. 5887–5900, 2019.

KAUSHIK, P. et al. EEG-based age and gender prediction using deep BLSTM-LSTM network model. IEEE Sensors Journal, p. 1–8, forthcoming.

SIDIROPOULOS, P. et al. Temporal video segmentation to scenes using high-level audiovisual features. IEEE Transactions on Circuits and Systems for Video Technology, IEEE, v. 21, n. 8, p. 1163–1177, 2011.

SIDIROPOULOS, P.; MEZARIS, V.; KOMPATSIARIS, I. Video tomographs and a base detector selection strategy for improving large-scale video concept detection. IEEE Transactions on Circuits and Systems for Video Technology, IEEE, v. 24, n. 7, p. 1251–1264, 2014.

TONOMURA, Y.; AKUTSU, A. Video tomography: An efficient method for camerawork extraction and motion analysis. In: International Conference on Multimedia. San Francisco, California, USA: ACM, 1994. p. 349–356.

APOSTOLIDIS, E.; MEZARIS, V. Fast shot segmentation combining global and local visual descriptors. In: International Conference on Acoustics, Speech and Signal Processing (ICASSP). Florence, Italy: IEEE, 2014. p. 1–5.

MARKATOPOULOU, F.; MEZARIS, V.; PATRAS, I. Cascade of classifiers based on binary, non-binary and deep convolutional network descriptors for video concept detection. In: International Conference on Image Processing (ICIP). Quebec City, QC, Canada: IEEE, 2015. p. 1–5.

VIOLA, P.; JONES, M. Rapid object detection using a boosted cascade of simple features. In: Conference on Computer Vision and Pattern Recognition. Kauai, HI, USA: IEEE Computer Society, 2001. p. 511–518.

SIMONYAN, K.; ZISSERMAN, A. Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations. San Diego, CA, USA: ICLR, 2015. p. 1–15.

MARKATOPOULOU, F.; MEZARIS, V.; PATRAS, I. Online multi-task learning for semantic concept detection in video. In: International Conference on Image Processing (ICIP). Phoenix, AZ, USA: IEEE, 2016. p. 1–5.

EATON, E.; RUVOLO, P. L. ELLA: An efficient lifelong learning algorithm. In: International Conference on Machine Learning. Atlanta, Georgia, USA: PMLR, 2013. p. 507–515.

MARKATOPOULOU, F.; MEZARIS, V.; PATRAS, I. Implicit and explicit concept relations in deep neural networks for multi-label video/image annotation. IEEE Transactions on Circuits and Systems for Video Technology, IEEE, v. 29, n. 6, p. 1631–1644, 2018.

GALLES, D.; PEARL, J. Testing identifiability of causal effects. In: Conference on Uncertainty in Artificial Intelligence. Montreal, Quebec, Canada: ACM, 1995. p. 185–195.

PEARL, J. The do-calculus revisited. In: FREITAS, N. de; MURPHY, K. (Ed.). Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence. Corvallis, OR: AUAI Press, 2012. p. 4–11.

GONZALES, C.; TORTI, L.; WUILLEMIN, P.-H. aGrUM: A Graphical Universal Model framework. In: International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems. Arras, France: Springer, 2017.

BAVEYE, Y. et al. LIRIS-ACCEDE: A video database for affective content analysis. IEEE Transactions on Affective Computing, v. 6, n. 1, p. 43–55, 2015.

LEITE, N. M. N. et al. Deep convolutional autoencoder for EEG noise filtering. In: International Conference on Bioinformatics and Biomedicine (BIBM). Madrid, Spain: IEEE, 2018. p. 2605–2612.

BAREINBOIM, E.; PEARL, J. Causal inference and the data-fusion problem. Proceedings of the National Academy of Sciences, v. 113, n. 27, p. 7345–7352, 2016.

Published

2022-05-16

How to Cite

Pereira, E. T., & do Nascimento Silva, G. (2022). Towards Causal Effect Estimation of Emotional Labeling of Watched Videos. Revista de Informática Teórica e Aplicada, 29(2), 48–62. https://doi.org/10.22456/2175-2745.111817

Issue

Vol. 29 No. 2 (2022)

Section

Regular Papers
