TY - GEN
T1 - Towards a Framework for Social Robot Co-speech Gesture Generation with Semantic Expression
AU - Zhang, Heng
AU - Yu, Chuang
AU - Tapus, Adriana
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2022.
PY - 2022/1/1
Y1 - 2022/1/1
N2 - The ability of a robot to express semantic co-speech gestures in an appropriate manner is needed to enhance interaction between humans and social robots. However, most learning-based methods for robot gesture generation are unsatisfactory at expressing semantic gestures. Many generated gestures are ambiguous, making it difficult for them to convey semantic meaning accurately. In this paper, we propose a robot gesture generation framework that effectively improves the semantic gesture expression ability of social robots. In this framework, the semantic words in a sentence are selected and expressed through clear and understandable co-speech gestures with appropriate timing. To test the proposed method, we designed an experiment and conducted a user study. The results show that gestures generated by the proposed method significantly outperform the baseline gestures on three evaluation factors: human-likeness, naturalness, and ease of understanding.
AB - The ability of a robot to express semantic co-speech gestures in an appropriate manner is needed to enhance interaction between humans and social robots. However, most learning-based methods for robot gesture generation are unsatisfactory at expressing semantic gestures. Many generated gestures are ambiguous, making it difficult for them to convey semantic meaning accurately. In this paper, we propose a robot gesture generation framework that effectively improves the semantic gesture expression ability of social robots. In this framework, the semantic words in a sentence are selected and expressed through clear and understandable co-speech gestures with appropriate timing. To test the proposed method, we designed an experiment and conducted a user study. The results show that gestures generated by the proposed method significantly outperform the baseline gestures on three evaluation factors: human-likeness, naturalness, and ease of understanding.
KW - Human-robot interaction
KW - Robot gesture
KW - Semantic
KW - Social robot
UR - https://www.scopus.com/pages/publications/85149869686
U2 - 10.1007/978-3-031-24667-8_10
DO - 10.1007/978-3-031-24667-8_10
M3 - Conference contribution
AN - SCOPUS:85149869686
SN - 9783031246661
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 110
EP - 119
BT - Social Robotics - 14th International Conference, ICSR 2022, Proceedings
A2 - Cavallo, Filippo
A2 - Fiorini, Laura
A2 - Sorrentino, Alessandra
A2 - Cabibihan, John-John
A2 - He, Hongsheng
A2 - Liu, Xiaorui
A2 - Matsumoto, Yoshio
A2 - Ge, Shuzhi Sam
PB - Springer Science and Business Media Deutschland GmbH
T2 - 14th International Conference on Social Robotics, ICSR 2022
Y2 - 13 December 2022 through 16 December 2022
ER -