TY - JOUR
T1 - A multi-modal dance corpus for research into interaction between humans in virtual environments
AU - Essid, Slim
AU - Lin, Xinyu
AU - Gowing, Marc
AU - Kordelas, Georgios
AU - Aksay, Anil
AU - Kelly, Philip
AU - Fillon, Thomas
AU - Zhang, Qianni
AU - Dielmann, Alfred
AU - Kitanovski, Vlado
AU - Tournemenne, Robin
AU - Masurelle, Aymeric
AU - Izquierdo, Ebroul
AU - O'Connor, Noel E.
AU - Daras, Petros
AU - Richard, Gaël
PY - 2013/1/1
Y1 - 2013/1/1
N2 - We present a new, freely available, multimodal corpus for research into, amongst other areas, real-time realistic interaction between humans in online virtual environments. The corpus targets an online dance class scenario in which students, with avatars driven by whatever 3D capture technology is locally available to them, learn choreographies under teacher guidance in an online virtual dance studio. Accordingly, it consists of student/teacher dance choreographies captured concurrently at two different sites using a variety of media modalities, including synchronised audio rigs, multiple cameras, wearable inertial measurement devices and depth sensors. Each of the dancers performs a number of fixed choreographies, which are graded according to specific evaluation criteria, and ground-truth dance choreography annotations are provided. For unsynchronised sensor modalities, the corpus also includes distinctive events to support data stream synchronisation. The total duration of the recorded content is 1 h and 40 min per sensor, amounting to 55 h of recordings across all sensors. Although the corpus is tailored to an online dance class application, the data is free to download and use for any research and development purposes.
AB - We present a new, freely available, multimodal corpus for research into, amongst other areas, real-time realistic interaction between humans in online virtual environments. The corpus targets an online dance class scenario in which students, with avatars driven by whatever 3D capture technology is locally available to them, learn choreographies under teacher guidance in an online virtual dance studio. Accordingly, it consists of student/teacher dance choreographies captured concurrently at two different sites using a variety of media modalities, including synchronised audio rigs, multiple cameras, wearable inertial measurement devices and depth sensors. Each of the dancers performs a number of fixed choreographies, which are graded according to specific evaluation criteria, and ground-truth dance choreography annotations are provided. For unsynchronised sensor modalities, the corpus also includes distinctive events to support data stream synchronisation. The total duration of the recorded content is 1 h and 40 min per sensor, amounting to 55 h of recordings across all sensors. Although the corpus is tailored to an online dance class application, the data is free to download and use for any research and development purposes.
KW - Activity recognition
KW - Audio
KW - Computer vision
KW - Dance
KW - Depth maps
KW - Inertial sensors
KW - Machine listening
KW - Motion
KW - Multimodal data
KW - Multiview video processing
KW - Synchronisation
KW - Virtual reality
U2 - 10.1007/s12193-012-0109-5
DO - 10.1007/s12193-012-0109-5
M3 - Article
AN - SCOPUS:84874779590
SN - 1783-7677
VL - 7
SP - 157
EP - 170
JO - Journal on Multimodal User Interfaces
JF - Journal on Multimodal User Interfaces
IS - 1-2
ER -