Visual attention model for manipulating human attention by a robot

Yusuke Tamura, Shiro Yano, Hisashi Osumi

Research output: Contribution to journal › Conference article › peer-review

3 Citations (Scopus)

Abstract

For smooth interaction between humans and robots, a robot should be able to manipulate human attention and behavior. In this study, we developed a visual attention model that allows a robot to manipulate human attention. The model consists of two modules: a saliency map generation module and a manipulation map generation module. The saliency map describes the bottom-up effect of visual stimuli on human attention, while the manipulation map describes the top-down effect of the face, hands, and gaze. To evaluate the proposed model, we measured human gaze points while participants watched a magic video, and then applied the attention model to the same video. The results of this experiment show that the proposed model explains human visual attention better than the original saliency map.
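The abstract describes combining a bottom-up saliency map with a top-down manipulation map into a single attention map. The exact combination rule is not given in the abstract; the sketch below assumes a simple weighted sum of min-max-normalized maps, with the weight as a hypothetical free parameter.

```python
import numpy as np

def combine_attention_maps(saliency, manipulation, weight=0.5):
    """Combine a bottom-up saliency map with a top-down manipulation map.

    NOTE: the weighted-sum rule and the `weight` parameter are assumptions
    for illustration; the paper may use a different combination scheme.
    """
    def normalize(m):
        # min-max normalize each map to [0, 1] so the two maps are comparable
        m = m - m.min()
        rng = m.max()
        return m / rng if rng > 0 else m

    return weight * normalize(saliency) + (1 - weight) * normalize(manipulation)

# Toy example: a manipulation cue (e.g., the magician's gaze target)
# boosts attention at one region of an otherwise random saliency map.
saliency = np.random.rand(4, 4)
manipulation = np.zeros((4, 4))
manipulation[1, 2] = 1.0  # hypothetical gaze-target location
attention = combine_attention_maps(saliency, manipulation)
```

The combined map can then be compared against measured gaze points, as in the paper's evaluation against the original saliency map.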

Original language: English
Article number: 6907639
Pages (from-to): 5307-5312
Number of pages: 6
Journal: Proceedings - IEEE International Conference on Robotics and Automation
DOIs: Yes
Publication status: Published - 2014 Sep 22
Externally published: Yes
Event: 2014 IEEE International Conference on Robotics and Automation, ICRA 2014 - Hong Kong, China
Duration: 2014 May 31 - 2014 Jun 7

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Artificial Intelligence
  • Electrical and Electronic Engineering

