
End-to-End Comparative Attention Networks for Person Re-Identification

Published: 01 July 2017

Abstract

Person re-identification across disjoint camera views has been widely applied in video surveillance, yet it remains a challenging problem. One of the major challenges lies in the lack of spatial and temporal cues, which makes it difficult to deal with large variations in lighting conditions, viewing angles, body poses, and occlusions. Recently, several deep-learning-based person re-identification approaches have been proposed and have achieved remarkable performance. However, most of these approaches extract discriminative features from the whole frame in one glimpse, without differentiating among the various parts of the persons to be identified. To handle such large appearance variations, it is essential to examine multiple highly discriminative local regions of the person images in detail, through multiple glimpses. In this paper, we propose a new soft attention-based model, i.e., the end-to-end comparative attention network (CAN), specifically tailored for the task of person re-identification. The end-to-end CAN learns to selectively focus on parts of pairs of person images after taking a few glimpses of them and adaptively comparing their appearance. The CAN model learns which parts of the images are relevant for discerning persons and automatically integrates information from different parts to determine whether a pair of images belongs to the same person. In other words, our proposed CAN model simulates the human perception process of verifying whether two images show the same person. Extensive experiments on four benchmark person re-identification data sets, including CUHK01, CUHK03, Market-1501, and VIPeR, clearly demonstrate that our proposed end-to-end CAN significantly outperforms well-established baselines and achieves new state-of-the-art performance.
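The attention mechanism is described above only in prose. As a purely illustrative aid, the sketch below shows, in PyTorch, what a soft-attention "glimpse" module in the spirit of the abstract could look like: an LSTM state drives additive attention over flattened CNN feature maps, one glimpse per recurrent step, and the per-glimpse states form a descriptor that can be compared across a pair of images. All names, dimensions, and the choice of three glimpses are assumptions made for illustration; this is not the authors' implementation.

# Illustrative sketch only: a soft-attention "glimpse" module in the spirit of
# the comparative attention network (CAN) described in the abstract. The paper
# page gives no code; every name, dimension, and the number of glimpses here
# are assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftAttentionGlimpse(nn.Module):
    """Attends over CNN feature-map locations, one glimpse per LSTM step."""

    def __init__(self, feat_dim=512, hidden_dim=256, num_glimpses=3):
        super().__init__()
        self.num_glimpses = num_glimpses
        self.hidden_dim = hidden_dim
        self.rnn = nn.LSTMCell(feat_dim, hidden_dim)
        # Scores each spatial location from its feature and the LSTM state.
        self.att_feat = nn.Linear(feat_dim, hidden_dim)
        self.att_hid = nn.Linear(hidden_dim, hidden_dim)
        self.att_score = nn.Linear(hidden_dim, 1)

    def forward(self, feats):
        # feats: (batch, locations, feat_dim), e.g. flattened conv feature maps.
        b, n, d = feats.shape
        h = feats.new_zeros(b, self.hidden_dim)
        c = feats.new_zeros(b, self.hidden_dim)
        glimpses = []
        for _ in range(self.num_glimpses):
            # Additive (soft) attention: weight every location, then average.
            scores = self.att_score(torch.tanh(
                self.att_feat(feats) + self.att_hid(h).unsqueeze(1)))  # (b, n, 1)
            alpha = F.softmax(scores, dim=1)
            glimpse = (alpha * feats).sum(dim=1)  # (b, feat_dim)
            h, c = self.rnn(glimpse, (h, c))
            glimpses.append(h)
        # Concatenate the per-glimpse states into one descriptor per image.
        return torch.cat(glimpses, dim=1)

# Usage: compare a pair of images by the distance between their descriptors.
attend = SoftAttentionGlimpse()
feat_a = torch.randn(4, 49, 512)   # e.g. 7x7 conv features of image A
feat_b = torch.randn(4, 49, 512)
dist = F.pairwise_distance(attend(feat_a), attend(feat_b))  # small => same person

Note that the CAN described in the abstract attends to pairs of images jointly while comparing their appearance; this sketch processes each image independently and omits that cross-image coupling for brevity.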


Published In

IEEE Transactions on Image Processing, Volume 26, Issue 7, July 2017, 522 pages

Publisher

IEEE Press


Qualifiers

• Research-article
