Point attention network for gesture recognition using point cloud data

Cherdsak Kingkan, Joshua Owoyemi, Koichi Hashimoto

Research output: Contribution to conference › Paper › peer-review

3 Citations (Scopus)

Abstract

In this work, we propose a neural network with novel attention modules to perform human gesture recognition from point cloud data. Our network takes point clouds directly as input, and the attention modules learn to attend only to the points that contribute to more accurate gesture recognition. The input point cloud spans both spatial and temporal dimensions, since consecutive point cloud frames are concatenated into a single input sample. This enables the network to learn spatio-temporal features of the data without explicit modeling of gesture dynamics. We propose gather and scatter operations for point downsampling and upsampling in feature space. We evaluate the performance of our network on a dataset of common Japanese gestures, on which the proposed network achieves state-of-the-art performance. An analysis of architecture design and parameter choices is also provided.
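The abstract does not spell out how the gather and scatter operations work, so the following is only a rough illustrative sketch, not the authors' implementation: attention scores are computed per point, the top-k highest-scoring points are gathered (downsampling in feature space), and their features are later scattered back to the original point indices (upsampling). The projection vector `w` stands in for learned attention weights.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_gather(points, feats, w, k):
    """Score each point with attention, keep the top-k (the 'gather' step)."""
    scores = softmax(feats @ w)            # (N,) attention distribution over points
    idx = np.argsort(scores)[-k:]          # indices of the k highest-scoring points
    return idx, points[idx], feats[idx] * scores[idx, None]

def scatter_upsample(n, idx, feats_k):
    """Place downsampled features back at their original indices (the 'scatter' step)."""
    out = np.zeros((n, feats_k.shape[1]))
    out[idx] = feats_k
    return out

rng = np.random.default_rng(0)
N, C, k = 128, 16, 32
pts = rng.normal(size=(N, 3))              # points from concatenated spatio-temporal frames
feats = rng.normal(size=(N, C))            # per-point features
w = rng.normal(size=(C,))                  # stand-in for learned attention weights

idx, pts_k, feats_k = attention_gather(pts, feats, w, k)
up = scatter_upsample(N, idx, feats_k)
print(pts_k.shape, up.shape)               # (32, 3) (128, 16)
```

In an actual network the attention weights would be trained end-to-end and the gathered features passed through further layers; here the point is only the index bookkeeping that lets downsampled features be restored to their source positions.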

Original language: English
Publication status: Published - 2019 Jan 1
Event: 29th British Machine Vision Conference, BMVC 2018 - Newcastle, United Kingdom
Duration: 2018 Sep 3 – 2018 Sep 6

Conference

Conference: 29th British Machine Vision Conference, BMVC 2018
Country: United Kingdom
City: Newcastle
Period: 18/9/3 – 18/9/6

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition

