Visual attention model for manipulating human attention by a robot

Yusuke Tamura, Shiro Yano, Hisashi Osumi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Citations (Scopus)

Abstract

For smooth interaction between a human and a robot, the robot should be able to manipulate human attention and behavior. In this study, we developed a visual attention model that allows a robot to manipulate human attention. The model consists of two modules: a saliency map generation module and a manipulation map generation module. The saliency map describes the bottom-up effect of visual stimuli on human attention, and the manipulation map describes the top-down effect of the face, hands, and gaze. To evaluate the proposed attention model, we measured human gaze points while participants watched a magic video and then applied the attention model to the same video. The results of this experiment show that the proposed attention model explains human visual attention better than the original saliency map.
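
The abstract does not specify how the two maps are combined. As a rough illustration only, the sketch below assumes a simple weighted sum of a normalized bottom-up saliency map and a normalized top-down manipulation map; the function names, the weight `alpha`, and the stand-in inputs are hypothetical and are not the authors' implementation.

```python
import numpy as np

def normalize(m: np.ndarray) -> np.ndarray:
    """Scale a map to the [0, 1] range."""
    lo, hi = m.min(), m.max()
    return (m - lo) / (hi - lo) if hi > lo else np.zeros_like(m)

def combine_attention(saliency: np.ndarray,
                      manipulation: np.ndarray,
                      alpha: float = 0.5) -> np.ndarray:
    """Assumed combination rule: weighted sum of the bottom-up saliency map
    and the top-down manipulation map, renormalized to [0, 1]."""
    return normalize(alpha * normalize(saliency)
                     + (1.0 - alpha) * normalize(manipulation))

if __name__ == "__main__":
    h, w = 60, 80
    saliency = np.random.rand(h, w)        # stand-in for an Itti-Koch-style saliency map
    manipulation = np.zeros((h, w))
    manipulation[20:30, 40:55] = 1.0       # stand-in region around the face, hands, or gaze target
    attention = combine_attention(saliency, manipulation, alpha=0.4)
    # The location of the maximum would serve as the predicted gaze point.
    print(attention.shape, np.unravel_index(attention.argmax(), attention.shape))
```

Under this assumption, `alpha` would control how strongly bottom-up stimulus saliency dominates over the top-down cues from the manipulator's face, hands, and gaze.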

Original language: English
Title of host publication: Proceedings - IEEE International Conference on Robotics and Automation
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 5307-5312
Number of pages: 6
ISBN (Electronic): 9781479936854
DOIs
Publication status: Published - 2014 Sep 22
Externally published: Yes
Event: 2014 IEEE International Conference on Robotics and Automation, ICRA 2014 - Hong Kong, China
Duration: 2014 May 31 - 2014 Jun 7

Publication series

Name: Proceedings - IEEE International Conference on Robotics and Automation
ISSN (Print): 1050-4729

Other

Other: 2014 IEEE International Conference on Robotics and Automation, ICRA 2014
Country/Territory: China
City: Hong Kong
Period: 14/5/31 - 14/6/7

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Artificial Intelligence
  • Electrical and Electronic Engineering
