Synthesizing multimodal utterances for conversational agents

Published: 01 March 2004

Abstract

Conversational agents are supposed to combine speech with non-verbal modalities for intelligible multimodal utterances. In this paper, we focus on the generation of gesture and speech from XML-based descriptions of their overt form. An incremental production model is presented that combines the synthesis of synchronized gestural, verbal, and facial behaviors with mechanisms for linking them in fluent utterances with natural co-articulation and transition effects. In particular, an efficient kinematic approach for animating hand gestures from shape specifications is presented, which provides fine adaptation to temporal constraints that are imposed by cross-modal synchrony. Copyright © 2004 John Wiley & Sons, Ltd.
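To make the abstract's pipeline concrete, the sketch below shows, in Python, how an XML-based description of an utterance's overt form might be scheduled so that a gesture stroke coincides with its speech affiliate, the cross-modal synchrony constraint the paper refers to. This is a minimal illustration only: the element and attribute names, the schedule_gesture function, and the retiming rule are hypothetical assumptions, not the paper's actual markup language or production model.

# Hypothetical sketch: aligning a gesture stroke with the onset of its
# affiliated word, given an XML description of speech timing and
# gesture phases. All markup and logic here are illustrative
# assumptions, not the paper's actual specification format.
import xml.etree.ElementTree as ET

UTTERANCE = """
<utterance>
  <speech>
    <word start="0.00" end="0.35">This</word>
    <word start="0.35" end="0.55">is</word>
    <word start="0.55" end="0.80">the</word>
    <word start="0.80" end="1.40" id="affiliate">window</word>
  </speech>
  <gesture affiliate="affiliate">
    <phase name="preparation" duration="0.40"/>
    <phase name="stroke"      duration="0.30"/>
    <phase name="retraction"  duration="0.50"/>
  </gesture>
</utterance>
"""

def schedule_gesture(xml_text):
    """Place the gesture stroke at the onset of its affiliate word.

    The preparation phase is scheduled backwards from the stroke onset,
    and the retraction follows the stroke. Returns a list of
    (phase name, start time, end time) triples in seconds.
    """
    root = ET.fromstring(xml_text)
    gesture = root.find("gesture")
    word = root.find(f".//word[@id='{gesture.get('affiliate')}']")
    stroke_onset = float(word.get("start"))  # cross-modal synchrony point

    durations = {p.get("name"): float(p.get("duration"))
                 for p in gesture.findall("phase")}
    prep_start = max(0.0, stroke_onset - durations["preparation"])
    stroke_end = stroke_onset + durations["stroke"]
    return [
        ("preparation", prep_start, stroke_onset),
        ("stroke", stroke_onset, stroke_end),
        ("retraction", stroke_end, stroke_end + durations["retraction"]),
    ]

if __name__ == "__main__":
    for phase, start, end in schedule_gesture(UTTERANCE):
        print(f"{phase:12s} {start:.2f}s - {end:.2f}s")

In this toy schedule, the stroke lands exactly on the onset of "window" at 0.80 s; a fuller system, as the abstract notes, would additionally adapt the kinematics of each phase so that transitions between successive gestures remain fluent under such timing constraints.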


Information & Contributors

Information

Published In

Computer Animation and Virtual Worlds, Volume 15, Issue 1
March 2004
60 pages
ISSN:1546-4261
EISSN:1546-427X

Publisher

John Wiley and Sons Ltd.

United Kingdom

Author Tags

  1. gesture animation
  2. model-based computer animation
  3. motion control
  4. multimodal conversational agents

Qualifiers

  • Article
