Animation synthesis triggered by vocal mimics

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

We propose a method that leverages the naturally time-related expressivity of the voice to control an animation composed of a set of short events. The user records themselves mimicking onomatopoeia such as "Tick", "Pop", or "Chhh", each associated with a specific animation event. The recorded soundtrack is automatically analyzed to extract the instant and type of every sound. We then synthesize an animation in which the type and timing of each event correspond with the soundtrack. Beyond being a natural way to control animation timing, we demonstrate that multiple stories can be generated efficiently by recording different voice sequences. Moreover, using more than one soundtrack allows us to control different characters with overlapping actions.
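The pipeline described in the abstract (detect when each mimicked sound occurs, classify it, and map it to an animation event) can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the paper does not specify its analysis method, and the threshold-based onset detector, the `EVENT_FOR_SOUND` mapping, and the function names are hypothetical stand-ins.

```python
def detect_onsets(envelope, threshold=0.5, sample_rate=100):
    """Return the times (in seconds) where the amplitude envelope
    rises above the threshold (a toy stand-in for sound detection)."""
    onsets = []
    above = False
    for i, amp in enumerate(envelope):
        if amp >= threshold and not above:
            onsets.append(i / sample_rate)
            above = True
        elif amp < threshold:
            above = False
    return onsets

# Hypothetical mapping from a classified onomatopoeia to an animation event.
EVENT_FOR_SOUND = {
    "Tick": "clock_step",
    "Pop": "bubble_burst",
    "Chhh": "steam_release",
}

def build_timeline(onsets, labels):
    """Pair each detected onset with the animation event for its sound label."""
    return [(t, EVENT_FOR_SOUND[label]) for t, label in zip(onsets, labels)]

# Toy amplitude envelope: silence with two bursts, at t = 0.10 s and t = 0.50 s.
env = [0.0] * 100
env[10:13] = [0.9, 0.8, 0.6]
env[50:52] = [0.7, 0.9]

onsets = detect_onsets(env)
timeline = build_timeline(onsets, ["Tick", "Pop"])
print(timeline)  # [(0.1, 'clock_step'), (0.5, 'bubble_burst')]
```

A real implementation would replace the threshold detector with a proper audio onset detector and the fixed labels with a sound classifier; the timeline structure, however, captures the core idea of driving event timing directly from the recorded voice.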

Original language: English
Title of host publication: Proceedings - MIG 2019
Subtitle of host publication: ACM Conference on Motion, Interaction, and Games
Editors: Stephen N. Spencer
Publisher: Association for Computing Machinery, Inc
ISBN (Electronic): 9781450369947
Publication status: Published - 28 Oct 2019
Event: 2019 ACM Conference on Motion, Interaction, and Games, MIG 2019 - Newcastle upon Tyne, United Kingdom
Duration: 28 Oct 2019 – 30 Oct 2019

Publication series

Name: Proceedings - MIG 2019: ACM Conference on Motion, Interaction, and Games

Conference

Conference: 2019 ACM Conference on Motion, Interaction, and Games, MIG 2019
Country/Territory: United Kingdom
City: Newcastle upon Tyne
Period: 28/10/19 – 30/10/19

Keywords

  • Sound-Driven Animation
  • Timing
  • Voice
