
Speech to head gesture mapping in multimodal human-robot interaction

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In human-human interaction, para-verbal and non-verbal communication are naturally aligned and synchronized. The difficulty of coordinating speech and head gestures lies in the conveyed meaning, the way a gesture is performed with respect to speech characteristics, their relative temporal arrangement, and their coordinated organization within the phrasal structure of an utterance. In this research, we focus on the mechanism of mapping head gestures to speech prosodic characteristics in natural human-robot interaction. Prosody patterns and head gestures are first aligned separately as a parallel multi-stream HMM model. The mapping between speech and head gestures is then based on Coupled Hidden Markov Models (CHMMs), which can be seen as a collection of HMMs, one for the video stream and one for the audio stream, with coupled state transitions. Experimental results with Nao robots are reported.
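
As a concrete illustration of the coupled structure the abstract describes, the sketch below implements a toy two-chain CHMM forward pass in Python/NumPy: each chain's next state depends on the previous states of both chains, which is what distinguishes a CHMM from two independent HMMs. This is a minimal sketch under stated assumptions, not the paper's trained model: it assumes discrete observation symbols and random placeholder parameters, and all names (n_audio, forward_loglik, etc.) are hypothetical. A real system would learn these matrices from aligned prosody/gesture data.

```python
import numpy as np

# Toy two-chain coupled HMM (audio prosody chain + head-gesture chain).
# All dimensions and parameter matrices are illustrative placeholders.

n_audio, n_video = 3, 4      # hidden states per chain (assumed)
n_obs_a, n_obs_v = 5, 6      # discrete observation symbols (assumed)

rng = np.random.default_rng(0)

def random_stochastic(shape):
    """Random array whose last axis sums to 1 (a valid distribution)."""
    m = rng.random(shape)
    return m / m.sum(axis=-1, keepdims=True)

# Coupled transitions: each chain conditions on BOTH previous states.
A_audio = random_stochastic((n_audio, n_video, n_audio))  # P(a_t | a_{t-1}, v_{t-1})
A_video = random_stochastic((n_audio, n_video, n_video))  # P(v_t | a_{t-1}, v_{t-1})
B_audio = random_stochastic((n_audio, n_obs_a))           # P(obs_a | a_t)
B_video = random_stochastic((n_video, n_obs_v))           # P(obs_v | v_t)
pi = random_stochastic((n_audio * n_video,)).reshape(n_audio, n_video)

def forward_loglik(obs_a, obs_v):
    """Log-likelihood of a paired observation sequence, computed with
    the scaled forward algorithm on the joint (audio, video) state space."""
    alpha = pi * np.outer(B_audio[:, obs_a[0]], B_video[:, obs_v[0]])
    scale = alpha.sum()
    log_lik = np.log(scale)
    alpha /= scale
    for t in range(1, len(obs_a)):
        # Joint transition P(a_t, v_t | a_{t-1}, v_{t-1}) factorizes
        # into the two coupled chains.
        alpha = np.einsum('ij,ijk,ijl->kl', alpha, A_audio, A_video)
        alpha *= np.outer(B_audio[:, obs_a[t]], B_video[:, obs_v[t]])
        scale = alpha.sum()
        log_lik += np.log(scale)
        alpha /= scale
    return log_lik

# Toy usage: score a short aligned prosody/gesture observation pair.
obs_a = rng.integers(0, n_obs_a, size=10)
obs_v = rng.integers(0, n_obs_v, size=10)
print(forward_loglik(obs_a, obs_v))
```

A mapping system in this spirit would, at synthesis time, pick the gesture-state path that maximizes the joint likelihood given the observed prosody stream; the sketch only shows the scoring half of that pipeline.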

Original language: English
Title of host publication: Service Orientation in Holonic and Multi-Agent Manufacturing Control
Editors: Theodor Borangiu, Andre Thomas, Damien Trentesaux
Pages: 183-196
Number of pages: 14
Publication status: Published - 20 Apr 2012

Publication series

Name: Studies in Computational Intelligence
Volume: 402
ISSN (Print): 1860-949X

Keywords

  • Coupled HMM
  • audio-video signal synchronization
  • human-robot interaction
  • robot services
  • signal mapping
