TY - JOUR
T1 - UGaitNet
T2 - Multimodal Gait Recognition with Missing Input Modalities
AU - Marin-Jimenez, Manuel J.
AU - Castro, Francisco M.
AU - Delgado-Escano, Ruben
AU - Kalogeiton, Vicky
AU - Guil, Nicolas
N1 - Publisher Copyright:
© 2005-2012 IEEE.
PY - 2021/1/1
Y1 - 2021/1/1
N2 - Gait recognition systems typically rely solely on silhouettes for extracting gait signatures. Nevertheless, these approaches struggle with changes in body shape and dynamic backgrounds, a problem that can be alleviated by learning from multiple modalities. However, in many real-life systems some modalities can be missing, and therefore most existing multimodal frameworks fail to cope with missing modalities. To tackle this problem, in this work, we propose UGaitNet, a unifying framework for gait recognition that is robust to missing modalities. UGaitNet handles and mingles various types and combinations of input modalities, i.e., pixel gray value, optical flow, depth maps, and silhouettes, while being camera agnostic. We evaluate UGaitNet on two public datasets for gait recognition, CASIA-B and TUM-GAID, and show that it obtains compact and state-of-the-art gait descriptors when leveraging multiple or missing modalities. Finally, we show that UGaitNet with optical flow and grayscale inputs achieves almost perfect (98.9%) recognition accuracy on CASIA-B (same-view 'normal') and 100% on TUM-GAID ('elapsed time'). Code will be available at https://github.com/avagait/ugaitnet.
AB - Gait recognition systems typically rely solely on silhouettes for extracting gait signatures. Nevertheless, these approaches struggle with changes in body shape and dynamic backgrounds, a problem that can be alleviated by learning from multiple modalities. However, in many real-life systems some modalities can be missing, and therefore most existing multimodal frameworks fail to cope with missing modalities. To tackle this problem, in this work, we propose UGaitNet, a unifying framework for gait recognition that is robust to missing modalities. UGaitNet handles and mingles various types and combinations of input modalities, i.e., pixel gray value, optical flow, depth maps, and silhouettes, while being camera agnostic. We evaluate UGaitNet on two public datasets for gait recognition, CASIA-B and TUM-GAID, and show that it obtains compact and state-of-the-art gait descriptors when leveraging multiple or missing modalities. Finally, we show that UGaitNet with optical flow and grayscale inputs achieves almost perfect (98.9%) recognition accuracy on CASIA-B (same-view 'normal') and 100% on TUM-GAID ('elapsed time'). Code will be available at https://github.com/avagait/ugaitnet.
KW - Gait
KW - biometrics
KW - deep learning
KW - multimodal
UR - https://www.scopus.com/pages/publications/85120855486
U2 - 10.1109/TIFS.2021.3132579
DO - 10.1109/TIFS.2021.3132579
M3 - Article
AN - SCOPUS:85120855486
SN - 1556-6013
VL - 16
SP - 5452
EP - 5462
JO - IEEE Transactions on Information Forensics and Security
JF - IEEE Transactions on Information Forensics and Security
ER -