
Speech to head gesture mapping in multimodal human-robot interaction

Research output: Chapter in a book, report, anthology or collection › Conference contribution › Peer-reviewed

Abstract

In human-human interaction, para-verbal and non-verbal communication are naturally aligned and synchronized. The difficulty in coordinating speech and head gestures concerns the conveyed meaning, how the gesture is performed with respect to speech characteristics, their relative timing, and their coordinated organization within the phrasal structure of the utterance. In this research, we focus on the mechanism of mapping head gestures to speech prosodic characteristics in natural human-robot interaction. Prosody patterns and head gestures are aligned separately as a parallel multi-stream HMM model. The mapping between speech and head gestures is based on Coupled Hidden Markov Models (CHMMs), which can be seen as a collection of HMMs, one for the video stream and one for the audio stream. Experimental results with Nao robots are reported.
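To make the CHMM idea concrete, the following is a minimal sketch (not the authors' implementation) of the forward pass over a two-chain coupled HMM, one chain for audio prosody states and one for video gesture states. In a CHMM each chain's transition is conditioned on the previous states of both chains; all state labels, matrix values, and observation alphabets below are illustrative assumptions.

```python
import numpy as np

def chmm_forward(A_audio, A_video, B_audio, B_video, pi_a, pi_v, obs_a, obs_v):
    """Forward pass over the joint state space of a two-chain coupled HMM.

    A_audio[i, j, k]: P(audio_t = k | audio_{t-1} = i, video_{t-1} = j)
    A_video[i, j, l]: P(video_t = l | audio_{t-1} = i, video_{t-1} = j)
    B_audio[k, o]:    P(audio observation o | audio state k)
    B_video[l, o]:    P(video observation o | video state l)
    Returns the likelihood of the paired observation sequences.
    """
    Na, Nv = len(pi_a), len(pi_v)
    # alpha[i, j] = P(observations up to t, audio_t = i, video_t = j)
    alpha = np.outer(pi_a, pi_v) * np.outer(B_audio[:, obs_a[0]],
                                            B_video[:, obs_v[0]])
    for t in range(1, len(obs_a)):
        new = np.zeros((Na, Nv))
        for i in range(Na):
            for j in range(Nv):
                # Coupled transition: each chain conditions on the
                # previous states of BOTH chains.
                new += alpha[i, j] * np.outer(A_audio[i, j], A_video[i, j])
        alpha = new * np.outer(B_audio[:, obs_a[t]], B_video[:, obs_v[t]])
    return alpha.sum()

# Toy model: 2 prosody states (low/high pitch), 2 gesture states
# (still/nod), binary observations per stream. Numbers are illustrative.
A_audio = np.full((2, 2, 2), 0.5)        # audio transitions, uniform
A_video = np.full((2, 2, 2), 0.5)
A_video[1, :, 1] = 0.9                   # high pitch at t-1 favors a nod
A_video[1, :, 0] = 0.1
B_audio = np.array([[0.8, 0.2], [0.2, 0.8]])
B_video = np.array([[0.7, 0.3], [0.3, 0.7]])
pi_a = np.array([0.5, 0.5])
pi_v = np.array([0.5, 0.5])

lik = chmm_forward(A_audio, A_video, B_audio, B_video, pi_a, pi_v,
                   obs_a=[1, 1, 1], obs_v=[0, 1, 1])
```

The same coupled structure is what distinguishes a CHMM from two independent HMMs: dropping the cross-chain dependence (indexing `A_video` by the video state alone) would factor the model into separate audio and video streams.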

Original language: English
Title of host publication: Service Orientation in Holonic and Multi-Agent Manufacturing Control
Editors: Theodor Borangiu, Andre Thomas, Damien Trentesaux
Pages: 183-196
Number of pages: 14
DOIs
Publication status: Published - 20 Apr 2012

Publication series

Name: Studies in Computational Intelligence
Volume: 402
ISSN (Print): 1860-949X

