Real-time 3D motion capture by monocular vision and virtual rendering

Research output: Contribution to journal › Article › peer-review

Abstract

Networked 3D virtual environments allow multiple users to interact over the Internet by means of avatars and to get some feeling of virtual telepresence. However, avatar control may be tedious. Motion capture systems based on 3D sensors have reached the consumer market, but webcams remain more widespread and cheaper. This work aims at animating a user’s avatar by real-time motion capture using a personal computer and a plain webcam. In a classical model-based approach, we register a 3D articulated upper-body model onto video sequences and propose a number of heuristics to accelerate particle filtering while robustly tracking user motion. Describing the body pose by the wrists’ 3D positions rather than by joint angles allows efficient handling of depth ambiguities in probabilistic tracking. We demonstrate experimentally the robustness of our 3D body tracking by real-time monocular vision, even under partial occlusions and motion in the depth direction.
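The probabilistic tracking described in the abstract rests on particle filtering. As a minimal illustration of that idea only (not the paper's method), the sketch below implements a generic bootstrap (SIR) particle filter tracking a 2D point under an assumed random-walk motion model and Gaussian observation likelihood; the paper's filter instead scores articulated upper-body poses, parameterized by wrist 3D positions, against image evidence.

```python
import numpy as np

def particle_filter(observations, n_particles=500, motion_std=0.5, obs_std=1.0, seed=0):
    """Bootstrap (SIR) particle filter for a 2D position.

    Hypothetical random-walk dynamics and Gaussian likelihood, for
    illustration of the predict/weight/resample cycle only.
    """
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, size=(n_particles, 2))  # initial belief
    estimates = []
    for z in observations:
        # Predict: diffuse particles under the motion model.
        particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
        # Update: weight each particle by its Gaussian observation likelihood.
        d2 = np.sum((particles - z) ** 2, axis=1)
        w = np.exp(-0.5 * d2 / obs_std ** 2)
        w /= w.sum()
        # Estimate: weighted mean of the particle set.
        estimates.append(w @ particles)
        # Resample: multinomial resampling to fight weight degeneracy.
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]
    return np.array(estimates)
```

The heuristics mentioned in the abstract would accelerate the expensive step of this loop, the likelihood evaluation, which for body tracking requires rendering and comparing each candidate pose against the video frame.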

Original language: English
Pages (from-to): 839-858
Number of pages: 20
Journal: Machine Vision and Applications
Volume: 28
Issue number: 8
DOIs
Publication status: Published - 1 Nov 2017
Externally published: Yes

Keywords

  • 3D motion capture
  • 3D/2D registration
  • Monocular vision
  • Particle filtering
  • Real-time computer vision
