Multi-modal interface in multi-display environment for multi-users

Yoshifumi Kitamura, Satoshi Sakurai, Tokuo Yamaguchi, Ryo Fukazawa, Yuichi Itoh, Fumio Kishino

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

4 Citations (Scopus)

Abstract

Multi-display environments (MDEs) are becoming increasingly common. By introducing multi-modal interaction techniques such as gaze, body/hand, and gestures, we established a sophisticated and intuitive interface for MDEs in which the displays are stitched together seamlessly and dynamically according to the users' viewpoints. Each user can interact with the multiple displays as if they were in front of an ordinary desktop GUI environment.
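The viewpoint-dependent "stitching" the abstract describes rests on perspective correction: rendering each display so that a virtual point appears where the ray from the user's eye through that point meets the display surface. A minimal sketch of that ray-plane intersection follows; the function and parameter names are illustrative assumptions, not taken from the paper.

```python
# Sketch of perspective correction for a viewpoint-tracked display:
# project a 3D point onto a display plane as seen from the user's eye.
# All names here are hypothetical, for illustration only.

def sub(a, b):
    """Component-wise vector subtraction."""
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    """Vector dot product."""
    return sum(x * y for x, y in zip(a, b))

def project_to_display(eye, point, plane_origin, plane_normal):
    """Intersect the ray from `eye` through `point` with a display plane
    given by a point on the plane and its normal. Returns None when the
    ray is parallel to the plane."""
    direction = sub(point, eye)
    denom = dot(direction, plane_normal)
    if abs(denom) < 1e-9:
        return None
    t = dot(sub(plane_origin, eye), plane_normal) / denom
    return tuple(e + t * d for e, d in zip(eye, direction))

# A virtual point behind a display plane at z = 2, seen from the origin:
hit = project_to_display((0.0, 0.0, 0.0), (2.0, 2.0, 4.0),
                         (0.0, 0.0, 2.0), (0.0, 0.0, 1.0))
print(hit)  # (1.0, 1.0, 2.0)
```

Applying this per display, with each display's own origin and normal, is what lets several differently oriented screens present one geometrically consistent scene to each tracked viewpoint.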

Original language: English
Title of host publication: Human-Computer Interaction
Subtitle of host publication: Novel Interaction Methods and Techniques - 13th International Conference, HCI International 2009, Proceedings
Pages: 66-74
Number of pages: 9
Edition: PART 2
DOIs
Publication status: Published - 2009
Externally published: Yes
Event: 13th International Conference on Human-Computer Interaction, HCI International 2009 - San Diego, CA, United States
Duration: 2009 Jul 19 - 2009 Jul 24

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Number: PART 2
Volume: 5611 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 13th International Conference on Human-Computer Interaction, HCI International 2009
Country: United States
City: San Diego, CA
Period: 09/7/19 - 09/7/24

Keywords

  • 3D user interfaces
  • CSCW
  • Graphical user interfaces
  • Perspective correction

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science(all)
