PEN: Pose-Embedding Network for Pedestrian Detection
Abstract:

In recent years, pedestrian detection has made significant progress through improved visual descriptions. However, visual descriptions alone are not robust enough to detect occluded pedestrians, which remains the bottleneck of existing pedestrian detection methods. To overcome this shortcoming, we employ human pose information, which is complementary to the visual description, to address occlusion and false-positive failures in pedestrian detection. The advantage of using human pose information is that a pose estimation model can still localize the visible parts of a pedestrian even when the pedestrian is occluded. By embedding human pose information into the visual description, we propose a novel Pose-Embedding Network for pedestrian detection, which consists of two components: a Region Proposal Network and a Pedestrian Recognition Network. The Region Proposal Network generates candidate proposals and their corresponding confidence scores. Given these candidate proposals, the Pedestrian Recognition Network distinguishes pedestrian proposals by jointly considering visual and pose information, refining the confidence scores and eliminating false positives. For each proposal image, the visual information is extracted by the Visual Feature Module, while the Human Pose Module, built on a pose estimation model, predicts the pose information. The Classification Module then fuses the visual and pose information into a pose-embedding pedestrian description. Extensive experiments on three challenging datasets, i.e., Caltech, CityPersons, and COCOPersons, show that the proposed approach achieves significant improvements over state-of-the-art methods.
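The fusion described above can be illustrated with a minimal sketch: a visual descriptor and a pose descriptor are concatenated into a single pose-embedding description, which a classifier then maps to a refined pedestrian confidence score. All names, dimensions, and the linear-plus-sigmoid classifier here are illustrative assumptions, not the paper's actual modules.

```python
import numpy as np

def fuse_and_score(visual_feat, pose_feat, w, b):
    """Concatenate visual and pose descriptors into a pose-embedding
    description, then score it with a linear classifier + sigmoid.
    Shapes and the classifier form are illustrative, not from the paper."""
    fused = np.concatenate([visual_feat, pose_feat])  # pose-embedding description
    logit = float(w @ fused + b)
    return 1.0 / (1.0 + np.exp(-logit))  # refined confidence in (0, 1)

# Toy example: a 4-D visual descriptor fused with a 2-D pose descriptor.
rng = np.random.default_rng(0)
visual = rng.standard_normal(4)
pose = rng.standard_normal(2)
w = rng.standard_normal(6)  # classifier weights over the fused 6-D vector
score = fuse_and_score(visual, pose, w, 0.0)
```

In the actual network, the refined score would replace the Region Proposal Network's initial confidence, suppressing false positives whose pose evidence is weak.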