TY - CHAP
T1 - STEM-JEPA
T2 - A JOINT-EMBEDDING PREDICTIVE ARCHITECTURE FOR MUSICAL STEM COMPATIBILITY ESTIMATION
AU - Riou, Alain
AU - Lattner, Stefan
AU - Hadjeres, Gaëtan
AU - Anslow, Michael
AU - Peeters, Geoffroy
N1 - Publisher Copyright:
© A. Riou, S. Lattner, G. Hadjeres, M. Anslow, G. Peeters.
PY - 2024/1/1
Y1 - 2024/1/1
N2 - This paper explores the automated process of determining stem compatibility by identifying audio recordings of single instruments that blend well with a given musical context. To tackle this challenge, we present Stem-JEPA, a novel Joint-Embedding Predictive Architecture (JEPA) trained on a multi-track dataset using a self-supervised learning approach. Our model comprises two networks: an encoder and a predictor, which are jointly trained to predict the embeddings of compatible stems from the embeddings of a given context, typically a mix of several instruments. Training a model in this manner allows its use in estimating stem compatibility—retrieving, aligning, or generating a stem to match a given mix—or for downstream tasks such as genre or key estimation, as the training paradigm requires the model to learn information related to timbre, harmony, and rhythm. We evaluate our model’s performance on a retrieval task on the MUSDB18 dataset, testing its ability to find the missing stem from a mix and through a subjective user study. We also show that the learned embeddings capture temporal alignment information and, finally, evaluate the representations learned by our model on several downstream tasks, highlighting that they effectively capture meaningful musical features.
AB - This paper explores the automated process of determining stem compatibility by identifying audio recordings of single instruments that blend well with a given musical context. To tackle this challenge, we present Stem-JEPA, a novel Joint-Embedding Predictive Architecture (JEPA) trained on a multi-track dataset using a self-supervised learning approach. Our model comprises two networks: an encoder and a predictor, which are jointly trained to predict the embeddings of compatible stems from the embeddings of a given context, typically a mix of several instruments. Training a model in this manner allows its use in estimating stem compatibility—retrieving, aligning, or generating a stem to match a given mix—or for downstream tasks such as genre or key estimation, as the training paradigm requires the model to learn information related to timbre, harmony, and rhythm. We evaluate our model’s performance on a retrieval task on the MUSDB18 dataset, testing its ability to find the missing stem from a mix and through a subjective user study. We also show that the learned embeddings capture temporal alignment information and, finally, evaluate the representations learned by our model on several downstream tasks, highlighting that they effectively capture meaningful musical features.
UR - https://www.scopus.com/pages/publications/85206944792
M3 - Chapter
AN - SCOPUS:85206944792
T3 - Proceedings of the International Society for Music Information Retrieval Conference
SP - 625
EP - 633
BT - Proceedings of the International Society for Music Information Retrieval Conference
PB - International Society for Music Information Retrieval
ER -