GaitAL-EI: a dual-branch CNN integrating gait energy images and video sequences for person identification
Authors
- Cuong Tran-Chi, The University of Danang - University of Science and Technology, Vietnam
- Hanh T. M. Tran, The University of Danang - University of Science and Technology, Vietnam
- Tien Ho-Phuoc, The University of Danang - University of Science and Technology, Vietnam
Abstract
Biometric-based identity recognition from images has attracted substantial research interest in recent years, driven by the growing need for reliable and non-intrusive methods of human identification. Among various approaches, gait recognition has emerged as a promising method due to its key advantages, including the ability to identify individuals at a distance and under diverse conditions. In this paper, we propose a deep convolutional neural network, called GaitAL-EI, to learn gait motion features through a dual-branch architecture that processes both gait energy images and video sequences. Our proposed method is trained and evaluated on public datasets. Comparative experiments demonstrate that the proposed approach achieves superior performance compared to the state-of-the-art GaitSet method, highlighting the effectiveness of integrating both static and dynamic gait information.
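The static branch of the proposed network consumes gait energy images (GEIs). Following the standard definition from reference [5], a GEI is the pixel-wise average of size-normalized, aligned binary silhouettes over a gait cycle. The following is a minimal pure-Python sketch of that computation; a real pipeline would first extract and size-normalize silhouette crops via background subtraction, which is omitted here.

```python
def gait_energy_image(silhouettes):
    """Gait energy image (GEI): the pixel-wise average of aligned
    binary silhouettes over a gait cycle (Han & Bhanu, 2006).
    Input: a list of T frames, each an HxW nested list of 0/1 values.
    Output: an HxW grid of energies in [0, 1]."""
    T = len(silhouettes)
    H, W = len(silhouettes[0]), len(silhouettes[0][0])
    return [[sum(frame[r][c] for frame in silhouettes) / T
             for c in range(W)]
            for r in range(H)]

# Toy example with two 2x2 silhouettes: a pixel covered in both
# frames gets energy 1.0, in one frame 0.5, in neither 0.0.
gei = gait_energy_image([[[1, 0], [1, 1]],
                         [[1, 0], [0, 1]]])
# -> [[1.0, 0.0], [0.5, 1.0]]
```

Because the average discards frame order, the GEI captures the static shape and cumulative motion energy of a walk, while the video-sequence branch retains the temporal dynamics the GEI loses.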
References
[1] S. Sarkar, P. J. Phillips, Z. Liu, I. R. Vega, P. Grother, and K. W. Bowyer, “The HumanID gait challenge problem: Data sets, performance, and analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 2, pp. 162–177, 2005. https://doi.org/10.1109/TPAMI.2005.39
[2] S. Zhang, Y. Wang, T. Chai, A. Li, and A. K. Jain, “Realgait: Gait recognition for person re-identification,” arXiv preprint arXiv:2201.04806, 2022.
[3] J. Zheng, X. Liu, W. Liu, L. He, C. Yan, and T. Mei, “Gait recognition in the wild with dense 3D representations and a benchmark,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 2022, pp. 20228–20237.
[4] R. Liao, S. Yu, W. An, and Y. Huang, “A model-based gait recognition method with body pose and human prior knowledge,” Pattern Recognition, vol. 98, 2020. https://doi.org/10.1016/j.patcog.2019.107069
[5] J. Han and B. Bhanu, “Individual recognition using gait energy image,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 2, pp. 316–322, 2006. https://doi.org/10.1109/TPAMI.2006.38
[6] S. Tong, Y. Fu, X. Yue, and H. Ling, “Multi-view gait recognition based on a spatial–temporal deep neural network,” IEEE Access, vol. 6, pp. 57583–57596, 2018.
[7] W. Xing, Y. Li, and S. Zhang, “View-invariant gait recognition method by three-dimensional convolutional neural network,” Journal of Electronic Imaging, vol. 27, no. 1, p. 013015, 2018. https://doi.org/10.1117/1.JEI.27.1.013015
[8] H. Chao, Y. He, J. Zhang, and J. Feng, “GaitSet: Regarding gait as a set for cross-view gait recognition,” in Proc. AAAI Conf. Artif. Intell., vol. 33, pp. 8126–8133, 2019. https://doi.org/10.1609/aaai.v33i01.33018126
[9] K. Wang, L. Liu, W. Zhai, and W. Cheng, “Gait recognition based on GEI and 2D-PCA,” Chinese Journal of Image and Graphics, vol. 14, no. 4, pp. 695–700, 2009.
[10] S. C. Bakchy, M. R. Islam, M. R. Mahmud, and F. Imran, “Human gait analysis using gait energy image,” arXiv preprint arXiv:2203.09549, 2022.
[11] Z. Zhu et al., “Gait recognition in the wild: A benchmark,” in Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), 2021, pp. 14789–14799. https://doi.org/10.1109/ICCV48922.2021.01452
[12] J. E. Boyd and J. J. Little, “Gait recognition, silhouette-based,” in Encyclopedia of Biometrics, S. Z. Li and A. Jain, Eds. Springer, 2015, pp. 813–820. https://doi.org/10.1007/978-0-387-73003-5_263
[13] A. Sokolova and A. Konushin, “Methods of gait recognition in video,” Programming and Computer Software, vol. 45, no. 4, pp. 213–220, 2019. https://doi.org/10.1134/S0361768819040091
[14] Z. Cui, R. Ke, Z. Pu, and Y. Wang, “Deep bidirectional and unidirectional LSTM recurrent neural network for network-wide traffic speed prediction,” arXiv preprint arXiv:1801.02143, 2018.
[15] A. Howard et al., “Searching for MobileNetV3,” in Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), 2019, pp. 1314–1324. https://doi.org/10.1109/ICCV.2019.00140
[16] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, “MobileNetV2: Inverted residuals and linear bottlenecks,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 2018, pp. 4510–4520. https://doi.org/10.1109/CVPR.2018.00474
[17] S. Zheng, J. Zhang, K. Huang, R. He, and T. Tan, “Robust view transformation model for gait recognition,” in Proc. IEEE Int. Conf. Image Process. (ICIP), 2011, pp. 2073–2076.
[18] A. Y. Johnson, J. Sun, and A. F. Bobick, “Predicting large population data cumulative match characteristic performance from small population data,” in Proc. Audio- and Video-Based Biometric Person Authentication (AVBPA), vol. 4, pp. 821–829, 2003. https://doi.org/10.1007/3-540-44887-X_95
[19] N. Takemura, Y. Makihara, D. Muramatsu, T. Echigo, and Y. Yagi, “Multi-view large population gait dataset and its performance evaluation for cross-view gait recognition,” IPSJ Transactions on Computer Vision and Applications, vol. 10, pp. 1–14, 2018.
[20] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
[21] E. Hoffer and N. Ailon, “Deep metric learning using triplet network,” in Proc. Similarity-Based Pattern Recognition (SIMBAD), 2015, pp. 84–92. https://doi.org/10.1007/978-3-319-24261-3_7
[22] J. Deng, J. Guo, N. Xue, and S. Zafeiriou, “ArcFace: Additive angular margin loss for deep face recognition,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 2019, pp. 4690–4699. https://doi.org/10.1109/CVPR.2019.00482

