TY - GEN
T1 - What Do I Look Like? A Conditional GAN Based Robot Facial Self-Awareness Approach
AU - Zhegong, Shangguan
AU - Yu, Chuang
AU - Huang, Wenjie
AU - Sun, Zexuan
AU - Tapus, Adriana
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022.
PY - 2022/1/1
Y1 - 2022/1/1
N2 - In uncertain social scenarios, self-awareness of facial expressions helps a person better understand, predict, and control his/her states. Self-awareness gives animals the ability to distinguish self from others and to recognize themselves. For cognitive robots, awareness of their actions and of the effects of those actions on themselves and the environment is crucial for reliable and trustworthy intelligent robots. In particular, we are interested in robot facial expression awareness, using joint action data to achieve self-face perception and recognition via a deep learning model. This work presents the first attempt toward robot facial expression self-awareness. We discuss the crucial role of self-awareness in social robots and propose a CGAN (Conditional Generative Adversarial Network) model that generates robot facial expression images from the motors' angle parameters. Using the CGAN method, the robot learns facial self-awareness from a series of facial images. In addition, we introduce our robot facial self-awareness dataset. Our method enables the robot to detect the difference between self and others from its currently generated image. The results show good performance and demonstrate the ability to achieve real-time robot facial self-awareness.
AB - In uncertain social scenarios, self-awareness of facial expressions helps a person better understand, predict, and control his/her states. Self-awareness gives animals the ability to distinguish self from others and to recognize themselves. For cognitive robots, awareness of their actions and of the effects of those actions on themselves and the environment is crucial for reliable and trustworthy intelligent robots. In particular, we are interested in robot facial expression awareness, using joint action data to achieve self-face perception and recognition via a deep learning model. This work presents the first attempt toward robot facial expression self-awareness. We discuss the crucial role of self-awareness in social robots and propose a CGAN (Conditional Generative Adversarial Network) model that generates robot facial expression images from the motors' angle parameters. Using the CGAN method, the robot learns facial self-awareness from a series of facial images. In addition, we introduce our robot facial self-awareness dataset. Our method enables the robot to detect the difference between self and others from its currently generated image. The results show good performance and demonstrate the ability to achieve real-time robot facial self-awareness.
KW - Generative adversarial network
KW - Human-robot interaction
KW - Self-aware robot
U2 - 10.1007/978-3-031-24667-8_28
DO - 10.1007/978-3-031-24667-8_28
M3 - Conference contribution
AN - SCOPUS:85149850990
SN - 9783031246661
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 312
EP - 324
BT - Social Robotics - 14th International Conference, ICSR 2022, Proceedings
A2 - Cavallo, Filippo
A2 - Fiorini, Laura
A2 - Sorrentino, Alessandra
A2 - Cabibihan, John-John
A2 - He, Hongsheng
A2 - Liu, Xiaorui
A2 - Matsumoto, Yoshio
A2 - Ge, Shuzhi Sam
PB - Springer Science and Business Media Deutschland GmbH
T2 - 14th International Conference on Social Robotics, ICSR 2022
Y2 - 13 December 2022 through 16 December 2022
ER -