Fusing Posture and Position Representations for Point Cloud-based Hand Gesture Recognition

Authors:

Alexander Bigalke and Mattias Heinrich

Abstract:

Hand gesture recognition can benefit from directly processing 3D point cloud sequences, which carry rich geometric information and enable the learning of expressive spatio-temporal features. However, currently employed single-stream models cannot sufficiently capture multi-scale features that include both fine-grained local posture variations and global hand movements. We therefore propose a novel dual-stream model, which decouples the learning of local and global features. These are eventually fused in an LSTM for temporal modelling. To induce the global and local streams to capture complementary position and posture features, we propose the use of different 3D learning architectures in the two streams. Specifically, state-of-the-art point cloud networks excel at capturing fine posture variations from raw point clouds in the local stream. To track hand movements in the global stream, we combine an encoding with residual basis point sets and a fully-connected DenseNet. We evaluate the method on the Shrec’17 and DHG datasets and report state-of-the-art results at a reduced computational cost. Source code is available at https://anonymous.4open.science/r/hand-gesture-3562.
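The sketch below illustrates the dual-stream idea outlined in the abstract: a local stream that extracts per-frame posture features from raw points, a global stream that encodes each frame against a fixed set of basis points before a fully-connected network, and an LSTM that fuses both feature streams over time. It is a minimal PyTorch sketch under these assumptions; all module and parameter names (LocalPostureStream, GlobalPositionStream, DualStreamGestureNet, num_basis_points, etc.) are illustrative placeholders rather than the authors' implementation, which is available in the linked repository.

```python
# Hedged sketch of a dual-stream point cloud gesture model (not the authors' code).
import torch
import torch.nn as nn


class LocalPostureStream(nn.Module):
    """Stand-in for a point cloud network (e.g. a PointNet-style encoder)
    that maps each frame's raw points to a posture feature vector."""

    def __init__(self, feat_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )

    def forward(self, points):          # points: (B, T, N, 3)
        feats = self.mlp(points)        # per-point features: (B, T, N, feat_dim)
        return feats.max(dim=2).values  # symmetric max-pooling over the points


class GlobalPositionStream(nn.Module):
    """Stand-in for the global stream: encode each frame by the distance of
    fixed basis points to their nearest input point, then process the
    encoding with a small fully-connected network."""

    def __init__(self, num_basis_points=64, feat_dim=128):
        super().__init__()
        # Fixed random basis points in a normalised workspace (placeholder).
        self.register_buffer("basis", torch.rand(num_basis_points, 3) * 2 - 1)
        self.fc = nn.Sequential(
            nn.Linear(num_basis_points, 128), nn.ReLU(),
            nn.Linear(128, feat_dim), nn.ReLU(),
        )

    def forward(self, points):                        # points: (B, T, N, 3)
        b, t = points.shape[0], points.shape[1]
        basis = self.basis.expand(b, t, -1, -1)       # (B, T, K, 3)
        d = torch.cdist(basis, points)                # (B, T, K, N)
        enc = d.min(dim=-1).values                    # (B, T, K)
        return self.fc(enc)                           # (B, T, feat_dim)


class DualStreamGestureNet(nn.Module):
    """Fuse per-frame posture and position features in an LSTM and classify."""

    def __init__(self, num_classes=14, feat_dim=128, hidden_dim=256):
        super().__init__()
        self.local_stream = LocalPostureStream(feat_dim)
        self.global_stream = GlobalPositionStream(feat_dim=feat_dim)
        self.lstm = nn.LSTM(2 * feat_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, points):                        # points: (B, T, N, 3)
        fused = torch.cat([self.local_stream(points),
                           self.global_stream(points)], dim=-1)
        _, (h, _) = self.lstm(fused)
        return self.classifier(h[-1])                 # (B, num_classes)


if __name__ == "__main__":
    # Toy input: batch of 2 sequences, 8 frames, 256 points per frame.
    x = torch.rand(2, 8, 256, 3)
    logits = DualStreamGestureNet()(x)
    print(logits.shape)  # torch.Size([2, 14])
```

The key design point mirrored here is the decoupling: the local stream only sees per-point geometry within a frame, while the global stream's basis point encoding summarises where the hand is relative to a fixed reference grid, so the two feature sets remain complementary before the LSTM fuses them.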



  Important Dates

All deadlines are 23:59 Pacific Time (PT). No extensions will be granted.

Paper registration July 30, 2021 (extended from July 23)
Paper submission July 30, 2021
Supplementary August 8, 2021
Tutorial submission August 15, 2021
Tutorial notification August 31, 2021
Rebuttal period September 16-22, 2021
Paper notification October 1, 2021
Camera ready October 15, 2021
Demo submission November 15, 2021 (extended from July 30)
Demo notification November 19, 2021 (changed from October 1)
Tutorial November 30, 2021
Main conference December 1-3, 2021

  Sponsors