Probabilistic dance performance alignment by fusion of multimodal features

  • Angelique Dremeau
  • Slim Essid

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

This paper presents a probabilistic framework for the multimodal alignment of dance movements. The approach is based on a Hidden Markov Model (HMM) and considers different feature functions, each corresponding to a particular modality: motion features extracted from depth maps, and audio features extracted from audio recordings of the dancers' steps. We show that this approach enables accurate alignment of dancers, while constituting a general framework for various multimodal alignment tasks.
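To make the idea concrete, the following is a minimal sketch of the kind of late-fusion HMM decoding the abstract describes: each modality contributes an observation log-likelihood per frame and per alignment state, the modalities are fused by summing log-likelihoods, and Viterbi decoding recovers the most likely alignment path. The left-to-right state space, the transition structure, and the toy likelihood values below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def viterbi_align(log_lik_per_modality, log_trans, log_init):
    """Viterbi decoding with late fusion of per-modality observation scores.

    log_lik_per_modality: list of (T, S) arrays of observation log-likelihoods,
                          one array per modality (e.g. motion, audio)
    log_trans:            (S, S) log transition matrix
    log_init:             (S,) log initial state distribution
    Returns the most likely state sequence (alignment path) of length T.
    """
    # Fusion step: assuming conditional independence of the modalities
    # given the state, their log-likelihoods simply add up.
    log_lik = np.sum(log_lik_per_modality, axis=0)
    T, S = log_lik.shape
    delta = log_init + log_lik[0]          # best score ending in each state
    psi = np.zeros((T, S), dtype=int)      # back-pointers
    for t in range(1, T):
        scores = delta[:, None] + log_trans        # scores[i, j]: i -> j
        psi[t] = np.argmax(scores, axis=0)         # best predecessor of j
        delta = scores[psi[t], np.arange(S)] + log_lik[t]
    # Backtrack from the best final state.
    path = np.empty(T, dtype=int)
    path[-1] = int(np.argmax(delta))
    for t in range(T - 1, 0, -1):
        path[t - 1] = psi[t, path[t]]
    return path

# Toy example: 3 alignment states (reference positions), 4 observed frames,
# left-to-right transitions (stay in place or advance by one state).
S = 3
trans = np.full((S, S), 1e-9)
for s in range(S):
    trans[s, s] = 0.5
    if s + 1 < S:
        trans[s, s + 1] = 0.5
log_trans = np.log(trans / trans.sum(axis=1, keepdims=True))
log_init = np.log(np.array([0.9, 0.05, 0.05]))

# Hypothetical per-frame likelihoods for two modalities (motion, audio).
motion = np.log(np.array([[0.8, 0.1, 0.1],
                          [0.1, 0.8, 0.1],
                          [0.1, 0.8, 0.1],
                          [0.1, 0.1, 0.8]]))
audio = np.log(np.array([[0.8, 0.1, 0.1],
                         [0.1, 0.8, 0.1],
                         [0.1, 0.8, 0.1],
                         [0.1, 0.1, 0.8]]))

path = viterbi_align([motion, audio], log_trans, log_init)
print(path)  # most likely alignment of the 4 frames to the 3 states
```

The left-to-right constraint on the transition matrix is what turns generic HMM decoding into an alignment: the decoded path can only stay at or advance through the reference positions, never move backwards.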

Original language: English
Title of host publication: 2013 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2013 - Proceedings
Pages: 3642-3646
Number of pages: 5
DOIs
Publication status: Published - 18 Oct 2013
Externally published: Yes
Event: 2013 38th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2013 - Vancouver, BC, Canada
Duration: 26 May 2013 - 31 May 2013

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
ISSN (Print): 1520-6149

Conference

Conference: 2013 38th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2013
Country/Territory: Canada
City: Vancouver, BC
Period: 26/05/13 - 31/05/13

Keywords

  • Hidden Markov Model
  • Multimodal alignment
  • dance gestures
