Multidimensional Feature Representation and Learning for Robust Hand-Gesture Recognition on Commercial Millimeter-Wave Radar

Abstract:

This article presents a robust hand-gesture recognition method based on multidimensional feature representation and learning, specifically designed for commercial frequency-modulated continuous-wave (FMCW) multiple-input multiple-output (MIMO) millimeter-wave radar. First, the optimal configuration of the radar system parameters for the hand-gesture recognition scenario is investigated, and a standard procedure for determining the system configuration is given. Then, a moving scattering-center model is proposed to represent the 3-D point cloud in the range-Doppler (RD)-angular multidimensional feature space. A scattering-point detection and tracking algorithm is presented based on a set of motion constraints on position, velocity, and acceleration, derived from the space-time continuity of a nonrigid target. Finally, a lightweight multichannel convolutional neural network (CNN) is designed to learn and classify multidimensional gesture features, including the radial RD and tangential azimuth-elevation features. Extensive experiments are carried out with the developed system, and a large data set is obtained to train and test the classifier. The results show that the proposed gesture recognition method can effectively distinguish gestures that are easily confused in the RD domain and achieves robust performance under various conditions.
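As an illustration of the motion-constraint idea mentioned in the abstract, the following is a minimal, hypothetical sketch of gating a new detection against an existing track using position-, velocity-, and acceleration-continuity limits. All names and thresholds (e.g., `max_speed`, `max_accel`) are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: associate a detected scattering point with a track
# only if the implied velocity and acceleration remain within bounds, as
# suggested by the space-time continuity of a nonrigid (hand) target.
# Thresholds and structure are illustrative, not taken from the paper.
from dataclasses import dataclass


@dataclass
class TrackState:
    pos: tuple   # (x, y, z) position in meters
    vel: tuple   # (vx, vy, vz) velocity in m/s
    t: float     # timestamp in seconds


def within_constraints(track, det_pos, det_t,
                       max_speed=5.0, max_accel=50.0):
    """Return True if a detection at det_pos, det_t is consistent with
    the track under velocity/acceleration continuity constraints."""
    dt = det_t - track.t
    if dt <= 0:
        return False
    # Velocity implied by moving from the track's position to the detection.
    v_new = tuple((d - p) / dt for d, p in zip(det_pos, track.pos))
    speed = sum(v * v for v in v_new) ** 0.5
    if speed > max_speed:              # velocity constraint
        return False
    # Acceleration implied relative to the track's last velocity.
    a = tuple((vn - v) / dt for vn, v in zip(v_new, track.vel))
    accel = sum(x * x for x in a) ** 0.5
    return accel <= max_accel          # acceleration constraint


track = TrackState(pos=(0.3, 0.0, 0.2), vel=(0.5, 0.0, 0.0), t=0.0)
print(within_constraints(track, (0.32, 0.0, 0.2), 0.05))  # plausible hand motion -> True
print(within_constraints(track, (1.50, 0.0, 0.2), 0.05))  # implausible jump -> False
```

In a full tracker, detections failing these gates would either spawn new tracks or be discarded as clutter; the gates simply encode that a hand cannot teleport or change velocity arbitrarily fast between radar frames.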