TY - GEN
T1 - Face Recognition in Video Streams for Mobile Assistive Devices Dedicated to Visually Impaired
AU - Tapu, Ruxandra
AU - Mocanu, Bogdan
AU - Zaharia, Titus
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/7/2
Y1 - 2018/7/2
N2 - In this paper, we introduce a novel face detection and recognition system based on deep convolutional networks, designed to improve visually impaired users' interaction and communication in social encounters. A first feature of the proposed architecture is a face detection system able to identify the various persons present in the scene regardless of subject location or pose. The faces are then tracked between successive frames using a CNN (Convolutional Neural Network)-based tracker trained offline on generic motion patterns. The system can handle face occlusion, rotation, and pose variation, as well as significant illumination changes. Finally, the faces are recognized in real time, directly from the video stream. The major contribution of the paper is a novel weight adaptation scheme able to determine the relevance of face instances and to create a global, fixed-size representation from all face instances tracked during the video stream. The experimental evaluation performed on a set of 30 video elements validates the approach, with average detection and recognition scores above 85%.
AB - In this paper, we introduce a novel face detection and recognition system based on deep convolutional networks, designed to improve visually impaired users' interaction and communication in social encounters. A first feature of the proposed architecture is a face detection system able to identify the various persons present in the scene regardless of subject location or pose. The faces are then tracked between successive frames using a CNN (Convolutional Neural Network)-based tracker trained offline on generic motion patterns. The system can handle face occlusion, rotation, and pose variation, as well as significant illumination changes. Finally, the faces are recognized in real time, directly from the video stream. The major contribution of the paper is a novel weight adaptation scheme able to determine the relevance of face instances and to create a global, fixed-size representation from all face instances tracked during the video stream. The experimental evaluation performed on a set of 30 video elements validates the approach, with average detection and recognition scores above 85%.
KW - Assistive device
KW - Deep convolutional networks
KW - Face recognition in video stream
KW - Visually impaired users
UR - https://www.scopus.com/pages/publications/85065914042
U2 - 10.1109/SITIS.2018.00030
DO - 10.1109/SITIS.2018.00030
M3 - Conference contribution
AN - SCOPUS:85065914042
T3 - Proceedings - 14th International Conference on Signal Image Technology and Internet Based Systems, SITIS 2018
SP - 137
EP - 142
BT - Proceedings - 14th International Conference on Signal Image Technology and Internet Based Systems, SITIS 2018
A2 - Chbeir, Richard
A2 - di Baja, Gabriella Sanniti
A2 - Gallo, Luigi
A2 - Yetongnon, Kokou
A2 - Dipanda, Albert
A2 - Castrillon-Santana, Modesto
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 14th International Conference on Signal Image Technology and Internet Based Systems, SITIS 2018
Y2 - 26 November 2018 through 29 November 2018
ER -