Statistical gesture models for 3D motion capture from a library of gestures with variants

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

A challenge for 3D motion capture by monocular vision is 3D-2D projection ambiguity, which can lead to incorrect poses during tracking. In this paper, we propose improving 3D motion capture by learning human gesture models from a library of gestures with variants. This library was created with virtual human animations. Gestures are described as Gaussian Process Dynamic Models (GPDM) and used as constraints for motion tracking. Given the raw input poses from the tracker, the gesture model helps to correct ambiguous poses. The benefit of the proposed method is demonstrated with experimental results.
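The paper's GPDM formulation is not reproduced in this record; purely as an illustration of the general idea (a learned latent gesture manifold used to snap noisy tracker output back to a plausible pose), here is a minimal NumPy sketch. All names, the 1-D latent space, the toy gesture data, and the nearest-manifold-point correction are assumptions for this sketch, not the authors' method.

```python
import numpy as np

# Hypothetical sketch: a 1-D latent trajectory z mapped to 3-DoF "poses"
# via Gaussian Process regression, standing in for a learned gesture model.

def rbf(a, b, ell=0.3, var=1.0):
    """RBF kernel matrix between 1-D latent position arrays a and b."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ell) ** 2)

# Toy training gesture: 20 latent positions with synthetic 3-D poses.
z_train = np.linspace(0.0, 1.0, 20)
poses_train = np.stack([np.sin(2 * np.pi * z_train),
                        np.cos(2 * np.pi * z_train),
                        z_train], axis=1)                  # shape (20, 3)

K_inv = np.linalg.inv(rbf(z_train, z_train) + 1e-6 * np.eye(len(z_train)))

def gp_mean(z_query):
    """GP posterior mean pose at each latent position in z_query."""
    return rbf(z_query, z_train) @ K_inv @ poses_train

def correct_pose(raw_pose, grid=np.linspace(0.0, 1.0, 200)):
    """Replace a noisy tracker pose with the closest pose on the
    learned gesture manifold (grid search over latent positions)."""
    means = gp_mean(grid)                                  # shape (200, 3)
    i = np.argmin(np.sum((means - raw_pose) ** 2, axis=1))
    return means[i]

# An ambiguous tracker output near the true pose at z = 0.25:
raw = np.array([1.1, -0.1, 0.3])
corrected = correct_pose(raw)
```

In this toy setting the corrected pose lies on the gesture manifold and is closer to the underlying true pose than the raw tracker output, which mirrors the role the abstract assigns to the gesture model.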

Original language: English
Title of host publication: Gesture in Embodied Communication and Human-Computer Interaction - 8th International Gesture Workshop, GW 2009, Revised Selected Papers
Pages: 219-230
Number of pages: 12
DOIs
Publication status: Published - 1 Dec 2009
Externally published: Yes
Event: 8th International Gesture Workshop: Gesture in Embodied Communication and Human-Computer Interaction, GW 2009 - Bielefeld, Germany
Duration: 25 Feb 2009 - 27 Feb 2009

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 5934 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 8th International Gesture Workshop: Gesture in Embodied Communication and Human-Computer Interaction, GW 2009
Country/Territory: Germany
City: Bielefeld
Period: 25/02/09 - 27/02/09

Keywords

  • 3D motion capture
  • Gaussian Process
  • Gesture library
  • Gesture model
