ACT2G: Attention-based Contrastive Learning for Text-to-Gesture Generation
Abstract
Recommendations
Eye, Head and Torso Coordination During Gaze Shifts in Virtual Reality
Humans perform gaze shifts naturally through a combination of eye, head, and body movements. Although gaze has long been studied as an input modality for interaction, prior work has largely ignored the coordination of the eyes, head, and body. This article ...
User centered gesture development for smart lighting
HCIK '16: Proceedings of HCI Korea
The aim of this study is to investigate and understand hand gesture expression when controlling a smart lighting system. Advances in technology have brought smart devices through which we can control multiple functions of one or more systems. In order ...
Crossmodal Clustered Contrastive Learning: Grounding of Spoken Language to Gesture
ICMI '21 Companion: Companion Publication of the 2021 International Conference on Multimodal Interaction
Crossmodal grounding is a key technical challenge when generating relevant and well-timed gestures from spoken language. Often, the same gesture can accompany semantically different spoken language phrases, which makes crossmodal grounding especially ...
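To make the contrastive-grounding idea above concrete, here is a minimal sketch of a symmetric crossmodal contrastive objective (InfoNCE) that aligns text and gesture embeddings in a shared space. This is an illustrative assumption, not the method of ACT2G or of the papers listed above; the module name CrossmodalContrastive, the feature dimensions, and the linear projections are all hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossmodalContrastive(nn.Module):
    # Hypothetical sketch: project text and gesture features into a shared
    # space and pull batch-paired samples together with symmetric InfoNCE.
    def __init__(self, text_dim=768, gesture_dim=126, embed_dim=256, temperature=0.07):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, embed_dim)
        self.gesture_proj = nn.Linear(gesture_dim, embed_dim)
        self.temperature = temperature

    def forward(self, text_feats, gesture_feats):
        # text_feats: (batch, text_dim); gesture_feats: (batch, gesture_dim)
        t = F.normalize(self.text_proj(text_feats), dim=-1)
        g = F.normalize(self.gesture_proj(gesture_feats), dim=-1)
        logits = t @ g.T / self.temperature  # (batch, batch) cosine similarities
        targets = torch.arange(t.size(0), device=t.device)
        # Diagonal pairs (text_i, gesture_i) are positives; all other pairs
        # in the batch serve as negatives. Average both retrieval directions.
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.T, targets))

# Usage with random stand-in features (8 text/gesture pairs):
model = CrossmodalContrastive()
loss = model(torch.randn(8, 768), torch.randn(8, 126))
loss.backward()

A clustered variant, as the ICMI '21 title suggests, would presumably treat samples within the same cluster as additional positives rather than requiring strict per-pair matches, so that semantically different phrases accompanying the same gesture are not pushed apart.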
Published In
Publisher
Association for Computing Machinery
New York, NY, United States
Qualifiers
- Research-article
- Research
- Refereed
Funding Sources
- JSPS/KAKENHI