Deep Attribute Guided Representation for Heterogeneous Face Recognition
Decheng Liu, Nannan Wang, Chunlei Peng, Jie Li, Xinbo Gao
Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Main track. Pages 835-841.
https://doi.org/10.24963/ijcai.2018/116
Heterogeneous face recognition (HFR) is a challenging problem in face recognition, owing to large differences in texture and spatial structure between face images from different modalities. Unlike conventional face recognition in homogeneous environments, in reality many face images are taken from different sources (different sensors or different acquisition mechanisms). Motivated by the human cognitive mechanism, we naturally utilize explicit invariant semantic information (face attributes) to bridge the gap between modalities. Existing related face recognition methods mostly treat attributes as high-level features fused with other engineered features to boost recognition performance, ignoring the inherent relationship between face attributes and identities. In this paper, we propose a novel deep attribute guided representation based heterogeneous face recognition method (DAG-HFR) that requires no manual attribute labeling. Deep convolutional networks are employed to directly map face images in heterogeneous scenarios to a compact common space in which distances reflect the similarities of pairs. An attribute guided triplet loss (AGTL) is designed to train an end-to-end HFR network that effectively mitigates the defects of incorrectly detected attributes. Extensive experiments on multiple heterogeneous scenarios (composite sketches, resident ID cards) demonstrate that the proposed method achieves superior performance compared with state-of-the-art methods.
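The abstract does not give the exact form of the attribute guided triplet loss, but the general idea of a triplet loss whose margin is modulated by attribute agreement can be sketched as follows. This is a minimal illustrative example, assuming squared Euclidean embedding distances, binary attribute vectors, and a hypothetical linear margin adjustment; the function name and parameters are not from the paper.

```python
import numpy as np

def attribute_guided_triplet_loss(anchor, positive, negative,
                                  attr_anchor, attr_negative,
                                  base_margin=0.2, attr_weight=0.1):
    """Illustrative triplet loss whose margin grows as the negative's
    attributes diverge from the anchor's (hypothetical form; the
    paper's exact AGTL formulation is not given in the abstract)."""
    # Squared Euclidean distances in the common embedding space.
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    # Fraction of binary attributes on which anchor and negative disagree.
    attr_dist = np.mean(attr_anchor != attr_negative)
    # Attribute-dependent margin: push attribute-dissimilar negatives further.
    margin = base_margin + attr_weight * attr_dist
    return max(0.0, d_pos - d_neg + margin)

# Toy usage: 2-D embeddings, 3 binary attributes (e.g. gender, glasses, beard).
anchor = np.array([0.0, 0.0])
positive = np.array([0.1, 0.0])
attr_a = np.array([1, 0, 1])
attr_n = np.array([0, 0, 1])

easy = attribute_guided_triplet_loss(anchor, positive,
                                     np.array([1.0, 0.0]), attr_a, attr_n)
hard = attribute_guided_triplet_loss(anchor, positive,
                                     np.array([0.2, 0.0]), attr_a, attr_n)
```

An easy negative (far from the anchor) yields zero loss, while a hard negative inside the attribute-adjusted margin yields a positive loss, so only informative triplets contribute gradients during end-to-end training.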
Keywords:
Computer Vision: Biometrics, Face and Gesture Recognition
Computer Vision: Computer Vision