Fast Online Video Pose Estimation by Dynamic Bayesian Modeling of Mode Transitions

Abstract

We propose a fast online video pose estimation method that detects and tracks human upper-body poses through conditional dynamic Bayesian modeling of pose modes, without referring to future frames. Estimating human body poses from videos is an important task with many applications. Our method extends fast image-based pose estimation to live video streams by leveraging the temporal correlation of articulated poses between frames. Poses are inferred over a time window using a conditional dynamic Bayesian network (CDBN), which we term the time-windowed CDBN. Specifically, latent pose modes and their transitions are modeled and co-determined by combining three modules: 1) inference based on current observations; 2) the modeling of mode-to-mode transitions as a probabilistic prior; and 3) the modeling of state-to-mode transitions using a multimode softmax regression. Given the predicted pose modes, the body poses, expressed as arm joint locations, can then be determined more accurately and robustly. Our method is particularly suited to high frame rate (HFR) scenarios, where pose mode transitions effectively capture action-related temporal information to boost performance. We evaluate our method on a newly collected HFR-Pose dataset and on four major video pose datasets (VideoPose2, TUM Kitchen, FLIC, and Penn_Action), achieving improvements in both accuracy and efficiency over existing online video pose estimation methods.
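As a rough sketch (in our own illustrative notation, not necessarily the paper's exact formulation), the three modules can be read as factors in a recursive mode posterior of the time-windowed CDBN, where $m_t$ is the latent pose mode at frame $t$, $z_t$ the image observation, $s_{t-1}$ the previous pose state, $w$ the window length, and $\phi_m$ the multimode softmax parameters:

\[
p(m_t \mid z_{t-w:t}) \;\propto\; \underbrace{p(z_t \mid m_t)}_{\text{1) observation}} \; \underbrace{\frac{\exp\!\left(\phi_{m_t}^{\top} s_{t-1}\right)}{\sum_{m'} \exp\!\left(\phi_{m'}^{\top} s_{t-1}\right)}}_{\text{3) state-to-mode softmax}} \; \sum_{m_{t-1}} \underbrace{p(m_t \mid m_{t-1})}_{\text{2) mode-to-mode prior}} \, p(m_{t-1} \mid z_{t-w:t-1}).
\]

Under this reading, the most probable mode $m_t$ then conditions the estimation of the arm joint locations at frame $t$.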