- Review article, December 2013
Smart material interfaces: "another step to a material future"
ICMI '13: Proceedings of the 15th ACM International Conference on Multimodal Interaction, Pages 611–612
https://doi.org/10.1145/2522848.2535893
Smart Materials have physical properties that can be changed or controlled by external stimuli such as electric or magnetic fields, temperature or stress. Shape, size and color are among the properties that can be changed. Smart Material Interfaces are ...
- Research article, December 2013
Problem solving, domain expertise and learning: ground-truth performance results for math data corpus
ICMI '13: Proceedings of the 15th ACM International Conference on Multimodal Interaction, Pages 569–574
https://doi.org/10.1145/2522848.2533791
Problem solving, domain expertise, and learning are analyzed for the Math Data Corpus, which involves multimodal data on collaborating student groups as they solve math problems together across sessions. Compared with non-expert students, domain experts ...
- Research article, December 2013
Multimodal learning analytics: description of math data corpus for ICMI grand challenge workshop
ICMI '13: Proceedings of the 15th ACM International Conference on Multimodal Interaction, Pages 563–568
https://doi.org/10.1145/2522848.2533790
This paper provides documentation on dataset resources for establishing a new research area called multimodal learning analytics (MMLA). Research on this topic has the potential to transform the future of educational practice and technology, as well as ...
- Research article, December 2013
ChAirGest: a challenge for multimodal mid-air gesture recognition for close HCI
ICMI '13: Proceedings of the 15th ACM International Conference on Multimodal Interaction, Pages 483–488
https://doi.org/10.1145/2522848.2532590
In this paper, we present a research-oriented open challenge focusing on multimodal gesture spotting and recognition from continuous sequences in the context of close human-computer interaction. We contextually outline the added value of the proposed ...
- Research article, December 2013
Fusing multi-modal features for gesture recognition
ICMI '13: Proceedings of the 15th ACM International Conference on Multimodal Interaction, Pages 453–460
https://doi.org/10.1145/2522848.2532589
This paper proposes a novel multi-modal gesture recognition framework and introduces its application to continuous sign language recognition. A Hidden Markov Model is used to construct the audio feature classifier. A skeleton feature classifier is ...
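As a rough illustration of the late-fusion architecture this abstract hints at (per-class HMMs for each modality whose log-likelihoods are combined), here is a minimal Python sketch using the hmmlearn library. It is not the authors' implementation; the state count, the fusion weight, and the data layout are all assumptions.

```python
# Minimal late-fusion sketch (NOT the paper's implementation): one Gaussian
# HMM per gesture class and per modality, combined by a weighted sum of
# log-likelihoods. hmmlearn, the state count, and the weight are assumptions.
import numpy as np
from hmmlearn import hmm

def train_models(train_data, n_states=5):
    """train_data maps class label -> list of (T_i, D) feature sequences."""
    models = {}
    for label, seqs in train_data.items():
        X = np.concatenate(seqs)              # stack all sequences
        lengths = [len(s) for s in seqs]      # sequence boundaries for fit()
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag")
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(audio_models, skel_models, audio_seq, skel_seq, w_audio=0.5):
    """Late fusion: pick the class with the best combined log-likelihood."""
    scores = {
        label: w_audio * audio_models[label].score(audio_seq)
               + (1.0 - w_audio) * skel_models[label].score(skel_seq)
        for label in audio_models
    }
    return max(scores, key=scores.get)
```

In a real system the fusion weight would be tuned on held-out data rather than fixed at 0.5.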
- Research article, December 2013
Gesture spotting and recognition using salience detection and concatenated hidden Markov models
ICMI '13: Proceedings of the 15th ACM International Conference on Multimodal Interaction, Pages 489–494
https://doi.org/10.1145/2522848.2532588
We developed a gesture-salience-based hand tracking method, and a gesture spotting and recognition method based on concatenated hidden Markov models. A 3-fold cross validation using the ChAirGest development data set with 10 users gives an F1 score of ...
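As context for the evaluation protocol mentioned above (3-fold cross-validation summarized by an F1 score), the following scikit-learn sketch shows the general shape of such a measurement. The classifier and the per-sample fold split are placeholders, not the authors' pipeline; with 10 users, folds would more plausibly be split by user so that train and test users stay disjoint.

```python
# Generic 3-fold cross-validated F1 measurement, not the authors' pipeline.
# X (features) and y (gesture labels) are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import KFold

def cross_validated_f1(X, y, n_splits=3, seed=0):
    """Return the mean macro-averaged F1 score over n_splits folds."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in kf.split(X):
        clf = RandomForestClassifier(random_state=seed)
        clf.fit(X[train_idx], y[train_idx])
        pred = clf.predict(X[test_idx])
        scores.append(f1_score(y[test_idx], pred, average="macro"))
    return float(np.mean(scores))
```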
- Keynote, December 2013
Hands and speech in space: multimodal interaction with augmented reality interfaces
ICMI '13: Proceedings of the 15th ACM International Conference on Multimodal Interaction, Pages 379–380
https://doi.org/10.1145/2522848.2532202
Augmented Reality (AR) is a technology that allows virtual imagery to be seamlessly integrated into the real world. Although first developed in the 1960s, AR has only recently become widely available, through platforms such as the web and ...
- Keynote, December 2013
Giving interaction a hand: deep models of co-speech gesture in multimodal systems
ICMI '13: Proceedings of the 15th ACM International Conference on Multimodal Interaction, Pages 245–246
https://doi.org/10.1145/2522848.2532201
Humans frequently join words and gestures for multimodal communication. Such natural co-speech gesturing goes far beyond what can currently be processed by gesture-based interfaces, and its coordination with speech in particular still poses open challenges ...
- Poster, December 2013
Persuasiveness in social multimedia: the role of communication modality and the challenge of crowdsourcing annotations
ICMI '13: Proceedings of the 15th ACM International Conference on Multimodal Interaction, Pages 321–324
https://doi.org/10.1145/2522848.2532198
With the exponential growth of social multimedia content online, it is increasingly important to understand why and how some content is perceived as persuasive while other content is ignored. This paper outlines my research goals in understanding ...
- Poster, December 2013
Modeling semantic aspects of gaze behavior while catalog browsing
ICMI '13: Proceedings of the 15th ACM International Conference on Multimodal Interaction, Pages 357–360
https://doi.org/10.1145/2522848.2532197
Gaze behavior is one of the crucial clues to understanding human mental states. The goal of this study is to build a probabilistic model that represents relationships between users' gaze behavior and user states during catalog browsing. In the proposed model, ...
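To make the idea of such a model concrete, here is a toy forward-filtering example in which hypothetical latent user states emit observed gaze events. The state names, observation types, and all probabilities below are invented for illustration and are not taken from the paper.

```python
# Toy HMM filter: infer a latent user state from observed gaze events.
# Every label and number here is illustrative, not from the paper.
import numpy as np

states = ["browsing", "interested"]           # hypothetical user states
obs_types = ["short_fixation", "long_dwell"]  # hypothetical gaze events

T = np.array([[0.8, 0.2],   # P(next state | current state)
              [0.3, 0.7]])
E = np.array([[0.7, 0.3],   # P(gaze event | state)
              [0.2, 0.8]])
prior = np.array([0.5, 0.5])

def filter_states(observations):
    """Forward filter: P(state_t | obs_1..t) after each gaze event."""
    belief = prior.copy()
    for o in observations:
        belief = (belief @ T) * E[:, obs_types.index(o)]  # predict + update
        belief /= belief.sum()                            # normalize
    return dict(zip(states, belief))

print(filter_states(["short_fixation", "long_dwell", "long_dwell"]))
```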
- Poster, December 2013
Towards a dynamic view of personality: multimodal classification of personality states in everyday situations
ICMI '13: Proceedings of the 15th ACM International Conference on Multimodal Interaction, Pages 325–328
https://doi.org/10.1145/2522848.2532195
A new perspective on the automatic recognition of personality is proposed, shifting our focus from the traditional goal of using behaviors to infer personality traits to the classification of excerpts of social behavior into personality states. ...
- Poster, December 2013
Designing effective multimodal behaviors for robots: a data-driven perspective
ICMI '13: Proceedings of the 15th ACM International Conference on Multimodal Interaction, Pages 329–332
https://doi.org/10.1145/2522848.2532189
Robots need to effectively use multimodal behaviors, including speech, gaze, and gestures, to support their users in achieving intended interaction goals, such as improved task performance. This proposed research concerns designing effective multimodal ...
- Demonstration, December 2013
TaSST: affective mediated touch
ICMI '13: Proceedings of the 15th ACM International Conference on Multimodal Interaction, Pages 315–316
https://doi.org/10.1145/2522848.2531758
Communication with others occurs through a multitude of signals, such as speech, facial expressions, and body postures. Understudied in this regard is the way we use our sense of touch in social communication. In this paper we present the TaSST (Tactile ...
- Demonstration, December 2013
A haptic touchscreen interface for mobile devices
ICMI '13: Proceedings of the 15th ACM International Conference on Multimodal Interaction, Pages 311–312
https://doi.org/10.1145/2522848.2531757
In this paper, we present a haptic touchscreen interface for mobile devices. A surface actuator composed of two parallel plates is mounted between a touch panel and a display module. It generates haptic feedback when a user touches the screen. The ...
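The control flow implied by this description (touch input answered by an actuator pulse) could be sketched as follows. Both the Actuator class and the touch-event source are hypothetical placeholders, not the paper's hardware interface.

```python
# Hypothetical event loop: each touch event triggers a haptic pulse.
# Actuator and get_touch_event stand in for real hardware drivers.
import time

class Actuator:
    """Placeholder for a vibrotactile surface-actuator driver."""
    def pulse(self, amplitude: float, duration_ms: int) -> None:
        print(f"pulse amp={amplitude:.2f} for {duration_ms} ms")

def run_feedback_loop(get_touch_event, actuator: Actuator) -> None:
    """Poll for touch events and answer each one with a short pulse."""
    while True:
        event = get_touch_event()        # returns None when nothing is touched
        if event is not None:
            # Illustrative mapping: harder presses get stronger feedback.
            actuator.pulse(amplitude=min(1.0, event["pressure"]),
                           duration_ms=20)
        time.sleep(0.005)                # poll at roughly 200 Hz
```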
- Demonstration, December 2013
A social interaction system for studying humor with the Robot NAO
ICMI '13: Proceedings of the 15th ACM International Conference on Multimodal Interaction, Pages 313–314
https://doi.org/10.1145/2522848.2531756
The video of our demonstrator presents a social interaction system for studying humor with the Aldebaran robot NAO. Our application records and analyzes audio and video streams to provide real-time feedback. Using this dialog system during show & tell ...
- Demonstration, December 2013
Talk ROILA to your Robot
ICMI '13: Proceedings of the 15th ACM International Conference on Multimodal Interaction, Pages 317–318
https://doi.org/10.1145/2522848.2531752
In our research we present a speech-recognition-friendly artificial language that is specially designed and implemented for humans to talk to robots. We call this language Robot Interaction Language (ROILA). In this paper, we describe our current work ...
- Demonstration, December 2013
Robotic learning companions for early language development
ICMI '13: Proceedings of the 15th ACM International Conference on Multimodal Interaction, Pages 71–72
https://doi.org/10.1145/2522848.2531750
Research from the past two decades indicates that preschool is a critical time for children's oral language and vocabulary development, which in turn is a primary predictor of later academic success. However, given the inherently social nature of ...
- Poster, December 2013
Predicting speech overlaps from speech tokens and co-occurring body behaviours in dyadic conversations
ICMI '13: Proceedings of the 15th ACM International Conference on Multimodal Interaction, Pages 157–164
https://doi.org/10.1145/2522848.2522893
This paper deals with speech overlaps in dyadic video-recorded spontaneous conversations. Speech overlaps are quite common in everyday conversations, and it is therefore important to study their occurrences in different communicative situations and ...
- Poster, December 2013
Timing and entrainment of multimodal backchanneling behavior for an embodied conversational agent
ICMI '13: Proceedings of the 15th ACM International Conference on Multimodal Interaction, Pages 181–188
https://doi.org/10.1145/2522848.2522890
We report on an analysis of feedback behavior in an Active Listening Corpus as produced verbally, visually (head movement) and bimodally. The behavior is modeled in an embodied conversational agent and displayed in a conversation with a real human to ...
- Research article, December 2013
Five key challenges in end-user development for tangible and embodied interaction
ICMI '13: Proceedings of the 15th ACM International Conference on Multimodal Interaction, Pages 247–254
https://doi.org/10.1145/2522848.2522887
As tangible and embodied systems are making the transition from the lab to everyday life, there is growing application-related research and design work in this field. We argue that the potential of these technologies can be even further ...