- Research article, October 2019
Effective Sentiment-relevant Word Selection for Multi-modal Sentiment Analysis in Spoken Language
MM '19: Proceedings of the 27th ACM International Conference on Multimedia, Pages 148–156. https://doi.org/10.1145/3343031.3350987
Computational modeling of human spoken language is an emerging research area in multimedia analysis spanning the text and acoustic modalities. Multi-modal sentiment analysis is one of the most fundamental tasks in human spoken language ...
- Short paper, July 2018
A Dependency Parser for Spontaneous Chinese Spoken Language
ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP), Volume 17, Issue 4, Article No. 28, Pages 1–13. https://doi.org/10.1145/3196278
Dependency analysis is vital for spoken language understanding in spoken dialogue systems. However, existing research has mainly focused on Western spoken languages, Japanese, and so on. Little research has been done on spoken Chinese in terms of ...
- Article, July 2010
Construction of linefeed insertion rules for lecture transcript and their evaluation
International Journal of Knowledge and Web Intelligence (IJKWI), Volume 1, Issue 3/4, Pages 227–242. https://doi.org/10.1504/IJKWI.2010.034189
A captioning system that supports the real-time understanding of monologue speech, such as lectures and commentaries, is needed. Since sentences in monologues tend to be long, each sentence is often displayed across multiple lines on the ...
- Chapter, August 2009
Shallow Parsing of Transcribed Speech of Estonian and Disfluency Detection
Human Language Technology. Challenges of the Information Society, August 2009, Pages 165–177. https://doi.org/10.1007/978-3-642-04235-5_15
This paper introduces our strategy for adapting a rule-based parser of written language to transcribed speech. Special attention has been paid to disfluencies (repairs, repetitions, and false starts). A Constraint Grammar based parser was used for ...
- Article, September 2008
Where Do Parsing Errors Come From
TSD '08: Proceedings of the 11th international conference on Text, Speech and Dialogue, Pages 161–168. https://doi.org/10.1007/978-3-540-87391-4_22
This paper discusses some issues in developing a parser for spoken Estonian that is based on an already existing parser for written language and employs the Constraint Grammar framework. When we used a corpus of face-to-face everyday conversations as ...
- Article, March 2007
Towards to mobile multimodal telecommunications systems and services
Communication itself is considered a multimodal interactive process binding speech with other modalities. This contribution presents some results of the MobilTel (Mobile Multimodal Telecommunications System) project. It has provided a ...
- Article, December 2006
The implementation of service enabling with spoken language of a multi-modal system ozone
ISCSLP'06: Proceedings of the 5th international conference on Chinese Spoken Language Processing, Pages 640–647. https://doi.org/10.1007/11939993_65
In this paper we describe the architecture and key issues of the service-enabling layer of Ozone, a multi-modal system oriented toward new technologies and services for emerging nomadic societies. The main objective of the Ozone system is to offer ...
- Article, September 2002
Speech act modeling in a spoken dialog system using a fuzzy fragment-class Markov model
Speech Communication (SPCO), Volume 38, Issue 1, Pages 183–199. https://doi.org/10.1016/S0167-6393(01)00052-8
In a spoken dialog system, identifying the speech act (SA) of a user's utterance is difficult for the computer because of the variability of spoken language. In this paper, a corpus-based fuzzy fragment-class Markov model (FFCMM) is proposed ...
- Article, March 2001
Just (all) the facts, ma'am
CHI EA '01: CHI '01 Extended Abstracts on Human Factors in Computing Systems, Pages 133–134. https://doi.org/10.1145/634067.634148
AT&T Communicator is a speech-enabled, telephony-based application that allows the end-user to select and reserve airline itineraries. We report the results of an experiment exploring how the amount and structure of information presented in complex lists ...
- Article, March 2001
Amount of information presented in a complex list: effects on user performance
HLT '01: Proceedings of the first international conference on Human language technology research, Pages 1–6. https://doi.org/10.3115/1072133.1072137
AT&T Communicator is a state-of-the-art, speech-enabled, telephony-based application that allows the end-user to, among other things, select and reserve airline itineraries. This experiment explores how the amount and structure of information presented in ...
- Article, April 2000
Natural-language interfaces
CHI EA '00: CHI '00 Extended Abstracts on Human Factors in Computing Systems, Page 376. https://doi.org/10.1145/633292.633523
The CHI research community has investigated a number of issues related to natural-language (NL) processing. These include usability of hypertext [e.g., 2], spoken-dialogue systems as interfaces [e.g., 8, 6, 7, 4], and multi-modal interaction [e.g., 1, 5] ...
- Article, April 2000
The efficiency of multimodal interaction for a map-based task
CHI EA '00: CHI '00 Extended Abstracts on Human Factors in Computing Systems, Pages 26–27. https://doi.org/10.1145/633292.633311
This paper compares the efficiency of using a standard direct-manipulation graphical user interface (GUI) with that of using the QuickSet pen/voice multimodal interface for supporting a military task. In this task, a user places military units and ...