Estimation of User's State during a Dialog Turn with Sequential Multi-modal Features

Yuya Chiba, Masashi Ito, Akinori Ito

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Citations (Scopus)

Abstract

A spoken dialog system (SDS) is a typical speech application and is sometimes regarded as one of the ideal interfaces. However, most conventional SDSs cannot help the user while waiting for an input utterance, since they treat the user's utterance as a trigger for processing. This architecture differs greatly from the manner of human-human interaction, and it is a factor that makes users feel inconvenienced when they cannot respond to the system's prompt appropriately. To solve this problem, the system should be able to estimate the internal state of the user before observing the user's input utterance. In the present paper, we propose a two-step discrimination method using multi-modal information to estimate the user's state frame by frame.
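The abstract does not specify the discrimination steps, features, or state labels. As a purely illustrative sketch, the following shows what a frame-by-frame, two-step discrimination over multi-modal features could look like; every threshold, feature name, and state label here is an assumption for illustration, not taken from the paper:

```python
def two_step_estimate(frames, gate_thresh=0.5, state_thresh=0.5):
    """Hypothetical two-step, frame-wise user-state discrimination.

    frames: iterable of (audio_score, visual_score) pairs, one per frame,
            each score assumed to lie in [0, 1].
    Step 1 gates each frame on the combined multi-modal evidence;
    Step 2 discriminates the state of the frames that pass the gate.
    All labels and thresholds are placeholders.
    """
    states = []
    for audio, visual in frames:
        # Step 1: decide whether the frame shows any user reaction at all
        if 0.5 * (audio + visual) < gate_thresh:
            states.append("idle")
        # Step 2: among reacting frames, discriminate the user's state
        elif visual > state_thresh:
            states.append("thinking")
        else:
            states.append("ready")
    return states
```

Running this on three synthetic frames, e.g. `two_step_estimate([(0.1, 0.1), (0.9, 0.9), (0.9, 0.2)])`, yields one label per frame, so the system can act before the user produces an input utterance.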

Original language: English
Title of host publication: HCI International 2013 - Posters' Extended Abstracts - International Conference, HCI International 2013, Proceedings
Publisher: Springer-Verlag
Pages: 572-576
Number of pages: 5
Edition: PART II
ISBN (Print): 9783642394751
DOIs
Publication status: Published - 2013 Jan 1
Event: 15th International Conference on Human-Computer Interaction, HCI International 2013 - Las Vegas, NV, United States
Duration: 2013 Jul 21 - 2013 Jul 26

Publication series

Name: Communications in Computer and Information Science
Number: PART II
Volume: 374
ISSN (Print): 1865-0929

Other

Other: 15th International Conference on Human-Computer Interaction, HCI International 2013
Country: United States
City: Las Vegas, NV
Period: 13/7/21 - 13/7/26

Keywords

  • Multi-modal information
  • Spoken dialog system
  • User modeling

ASJC Scopus subject areas

  • Computer Science(all)
