Cited By
- Shi C., Zhang Y., Liu B. (2024). A multimodal fusion-based deep learning framework combined with local-global contextual TCNs for continuous emotion recognition from videos. Applied Intelligence, 54(4), 3040-3057. https://doi.org/10.1007/s10489-024-05329-w
- Karas V., Schuller D., Schuller B. (2023). Audiovisual Affect Recognition for Autonomous Vehicles: Applications and Future Agendas. IEEE Transactions on Intelligent Transportation Systems, 25(6), 4918-4932. https://doi.org/10.1109/TITS.2023.3333749
- Kefalas T., Fotiadou E., Georgopoulos M., Panagakis Y., Ma P., Petridis S., Stafylakis T., Pantic M. (2023). KAN-AV dataset for audio-visual face and speech analysis in the wild. Image and Vision Computing, 140(C). https://doi.org/10.1016/j.imavis.2023.104839