TY - JOUR
T1 - Activity recognition using gazed text and viewpoint information for user support systems
AU - Chiba, Shun
AU - Miyazaki, Tomo
AU - Sugaya, Yoshihiro
AU - Omachi, Shinichiro
N1 - Funding Information:
This work was partially supported by JSPS KAKENHI Grant Numbers JP16H02841 and JP16K00259.
Publisher Copyright:
© 2018 by the authors.
PY - 2018/8/2
Y1 - 2018/8/2
AB - The development of information technology has added many conveniences to our lives. At the same time, however, we must deal with various kinds of information, which can be difficult for elderly people or those who are not familiar with information devices. A technology that recognizes each person’s activity and provides appropriate support based on that activity could be useful for such people. In this paper, we propose a novel fine-grained activity recognition method for user support systems that focuses on identifying the text at which a user is gazing, based on the idea that the content of the text is related to the activity of the user. Importantly, the meaning of the text depends on its location. To tackle this problem, we propose the simultaneous use of a wearable device and a fixed camera. To obtain the global location of the gazed text, we perform image matching using the local features of the images obtained by these two devices. We then generate a feature vector based on this location information and the content of the text. To demonstrate the effectiveness of the proposed approach, we performed activity recognition experiments with six subjects in a laboratory environment.
KW - Activity recognition
KW - Eye tracker
KW - Fisheye camera
KW - Viewpoint information
UR - http://www.scopus.com/inward/record.url?scp=85051811862&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85051811862&partnerID=8YFLogxK
U2 - 10.3390/jsan7030031
DO - 10.3390/jsan7030031
M3 - Article
AN - SCOPUS:85051811862
VL - 7
JO - Journal of Sensor and Actuator Networks
JF - Journal of Sensor and Actuator Networks
SN - 2224-2708
IS - 3
M1 - 31
ER -