- research-article, April 2024
Looking for a better fit? An Incremental Learning Multimodal Object Referencing Framework adapting to Individual Drivers
IUI '24: Proceedings of the 29th International Conference on Intelligent User Interfaces, Pages 1–13, https://doi.org/10.1145/3640543.3645152
The rapid advancement of the automotive industry towards automated and semi-automated vehicles has rendered traditional methods of vehicle interaction, such as touch-based and voice command systems, inadequate for a widening range of non-driving related ...
- short-paper, October 2023
Towards Adaptive User-centered Neuro-symbolic Learning for Multimodal Interaction with Autonomous Systems
ICMI '23: Proceedings of the 25th International Conference on Multimodal Interaction, Pages 689–694, https://doi.org/10.1145/3577190.3616121
Recent advances in deep learning and data-driven approaches have facilitated the perception of objects and their environments in a perceptual subsymbolic manner. Thus, these autonomous systems can now perform object detection, sensor data fusion, and ...
- short-paper, November 2022
Adaptive User-Centered Multimodal Interaction towards Reliable and Trusted Automotive Interfaces
ICMI '22: Proceedings of the 2022 International Conference on Multimodal Interaction, Pages 690–695, https://doi.org/10.1145/3536221.3557034
With the recently increasing capabilities of modern vehicles, novel approaches for interaction have emerged that go beyond traditional touch-based and voice command approaches. Therefore, hand gestures, head pose, eye gaze, and speech have been extensively ...