TY - GEN
T1 - Robot self-recognition via facial expression sensorimotor learning
AU - Shangguan, Zhegong
AU - Ding, Mengyuan
AU - Yu, Chuang
AU - Chen, Chaona
AU - Tapus, Adriana
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023/1/1
Y1 - 2023/1/1
N2 - To develop robots that exhibit cognitive functions, we must draw on knowledge of human cognition. Existing biological and psychological evidence suggests that self-face perception and sensorimotor learning mechanisms play a crucial role in self-recognition. However, one of the most important self-identity cues - facial information - has not been extensively studied in the robot self-recognition task. Current research on robot self-recognition relies primarily on the recognition of high-precision targets and the tracking of manipulator motions, while the self-perception of facial information remains largely unexplored. In this work, we propose a novel approach to achieving self-recognition via the self-perception of facial expressions. Specifically, we developed a Conditional Generative Adversarial Network (CGAN) model informed by knowledge of human cognitive and sensorimotor functions, which allows the robot to become aware of its own face (i.e., an offline model). By observing visual variations in a mirror and comparing them with self-perceptive information through online Bayesian learning regression, the robot can recognize itself. The results of our first experiment show that the robot can recognize itself in a mirror. The results of the second experiment show that our algorithm can be tricked by a similar robot displaying the same facial expressions, an effect analogous to the rubber hand illusion (RHI).
AB - To develop robots that exhibit cognitive functions, we must draw on knowledge of human cognition. Existing biological and psychological evidence suggests that self-face perception and sensorimotor learning mechanisms play a crucial role in self-recognition. However, one of the most important self-identity cues - facial information - has not been extensively studied in the robot self-recognition task. Current research on robot self-recognition relies primarily on the recognition of high-precision targets and the tracking of manipulator motions, while the self-perception of facial information remains largely unexplored. In this work, we propose a novel approach to achieving self-recognition via the self-perception of facial expressions. Specifically, we developed a Conditional Generative Adversarial Network (CGAN) model informed by knowledge of human cognitive and sensorimotor functions, which allows the robot to become aware of its own face (i.e., an offline model). By observing visual variations in a mirror and comparing them with self-perceptive information through online Bayesian learning regression, the robot can recognize itself. The results of our first experiment show that the robot can recognize itself in a mirror. The results of the second experiment show that our algorithm can be tricked by a similar robot displaying the same facial expressions, an effect analogous to the rubber hand illusion (RHI).
U2 - 10.1109/RO-MAN57019.2023.10309548
DO - 10.1109/RO-MAN57019.2023.10309548
M3 - Conference contribution
AN - SCOPUS:85187019332
T3 - IEEE International Workshop on Robot and Human Communication, RO-MAN
SP - 2591
EP - 2597
BT - 2023 32nd IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2023
PB - IEEE Computer Society
T2 - 32nd IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2023
Y2 - 28 August 2023 through 31 August 2023
ER -