
Prosody-driven robot arm gestures generation in human-robot interaction

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In multimodal human-robot interaction (HRI), communication can be established through verbal, non-verbal, and/or para-verbal cues. The linguistic literature [3] shows that para-verbal and non-verbal communication are naturally synchronized. This research focuses on the relation between non-verbal and para-verbal communication by mapping prosody cues to corresponding arm gestures. Our approach for synthesizing arm gestures uses coupled hidden Markov models (CHMMs), which can be viewed as a collection of HMMs that jointly model the segmented stream of prosodic characteristics and the segmented streams of rotation characteristics of the two arms' articulations [4][1]. The approach was tested on a Nao robot.
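Since the record only summarizes the method, the following is a minimal, self-contained sketch of the coupled-HMM idea the abstract describes: two hidden chains (a prosody chain and, for brevity, a single gesture chain instead of the paper's two arm-rotation streams) whose transitions each depend on both chains' previous states, with Viterbi decoding over the joint state space to map a prosody observation sequence to a gesture-state sequence. All names, dimensions, and the randomly initialized parameters are illustrative assumptions; the paper's actual features, segmentation, and trained CHMM parameters are not reproduced here.

import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions, not from the paper).
N_PROSODY_STATES = 3   # e.g. low / medium / high vocal activity
N_GESTURE_STATES = 3   # e.g. rest / small / large arm movement
N_PROSODY_OBS = 4      # quantized prosody symbols (pitch/energy bins)

def normalize(a, axis=-1):
    # Turn non-negative entries into probability distributions.
    return a / a.sum(axis=axis, keepdims=True)

# Coupled transitions: each chain's next state depends on the previous
# states of BOTH chains: A_prosody[p, g, p2] = P(p2 | p, g) and
# A_gesture[g, p, g2] = P(g2 | g, p). Random placeholders for training.
A_prosody = normalize(rng.random((N_PROSODY_STATES, N_GESTURE_STATES, N_PROSODY_STATES)))
A_gesture = normalize(rng.random((N_GESTURE_STATES, N_PROSODY_STATES, N_GESTURE_STATES)))

# Discrete emission of prosody symbols from prosody states.
B_prosody = normalize(rng.random((N_PROSODY_STATES, N_PROSODY_OBS)))

# Initial distribution over the joint state (p, g).
pi = normalize(rng.random((N_PROSODY_STATES, N_GESTURE_STATES)), axis=None)

def decode_gestures(obs):
    # Viterbi over the joint (prosody, gesture) state space; returns the
    # most likely gesture-state sequence for the prosody observations.
    T = len(obs)
    P, G = N_PROSODY_STATES, N_GESTURE_STATES
    delta = np.log(pi) + np.log(B_prosody[:, obs[0]])[:, None]
    psi = np.zeros((T, P, G, 2), dtype=int)   # backpointers to (p, g)
    for t in range(1, T):
        new_delta = np.full((P, G), -np.inf)
        for p2 in range(P):
            for g2 in range(G):
                # Factorized coupled transition into (p2, g2).
                trans = (delta
                         + np.log(A_prosody[:, :, p2])
                         + np.log(A_gesture.transpose(1, 0, 2)[:, :, g2]))
                best = np.unravel_index(np.argmax(trans), trans.shape)
                new_delta[p2, g2] = trans[best]
                psi[t, p2, g2] = best
        delta = new_delta + np.log(B_prosody[:, obs[t]])[:, None]
    # Backtrack the best joint path and keep only the gesture chain.
    path = np.zeros((T, 2), dtype=int)
    path[-1] = np.unravel_index(np.argmax(delta), delta.shape)
    for t in range(T - 1, 0, -1):
        path[t - 1] = psi[t, path[t, 0], path[t, 1]]
    return path[:, 1]

if __name__ == "__main__":
    obs = rng.integers(0, N_PROSODY_OBS, size=10)   # fake prosody symbols
    print("prosody obs :", obs)
    print("gesture path:", decode_gestures(obs))

In a real system the decoded gesture states would then index motion primitives sent to the robot's arm joints; the sketch stops at the state sequence because the paper's gesture synthesis details are not given in this record.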

Original language: English
Title of host publication: HRI'12 - Proceedings of the 7th Annual ACM/IEEE International Conference on Human-Robot Interaction
Pages: 257-258
Number of pages: 2
DOIs
Publication status: Published - 26 Apr 2012
Event: 7th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI'12 - Boston, MA, United States
Duration: 5 Mar 2012 - 8 Mar 2012

Publication series

Name: HRI'12 - Proceedings of the 7th Annual ACM/IEEE International Conference on Human-Robot Interaction

Conference

Conference: 7th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI'12
Country/Territory: United States
City: Boston, MA
Period: 5/03/12 - 8/03/12

Keywords

  • CHMM
  • human-robot interaction
  • non-verbal and para-verbal mapping
