TY - GEN
T1 - TRANSFER LEARNING AND BIAS CORRECTION WITH PRE-TRAINED AUDIO EMBEDDINGS
AU - Wang, Changhong
AU - Richard, Gaël
AU - McFee, Brian
N1 - Publisher Copyright:
© Changhong Wang, Gaël Richard, and Brian McFee.
PY - 2023/1/1
Y1 - 2023/1/1
N2 - Deep neural network models have become the dominant approach to a large variety of tasks within music information retrieval (MIR). These models generally require large amounts of (annotated) training data to achieve high accuracy. Because not all applications in MIR have sufficient quantities of training data, it is becoming increasingly common to transfer models across domains. This approach allows representations derived for one task to be applied to another, and can result in high accuracy with less stringent training data requirements for the downstream task. However, the properties of pre-trained audio embeddings are not fully understood. Specifically, and unlike traditionally engineered features, the representations extracted from pre-trained deep networks may embed and propagate biases from the model's training regime. This work investigates the phenomenon of bias propagation in the context of pre-trained audio representations for the task of instrument recognition. We first demonstrate that three different pre-trained representations (VGGish, OpenL3, and YAMNet) exhibit comparable performance when constrained to a single dataset, but differ in their ability to generalize across datasets (OpenMIC and IRMAS). We then investigate dataset identity and genre distribution as potential sources of bias. Finally, we propose and evaluate post-processing countermeasures to mitigate the effects of bias, and improve generalization across datasets.
AB - Deep neural network models have become the dominant approach to a large variety of tasks within music information retrieval (MIR). These models generally require large amounts of (annotated) training data to achieve high accuracy. Because not all applications in MIR have sufficient quantities of training data, it is becoming increasingly common to transfer models across domains. This approach allows representations derived for one task to be applied to another, and can result in high accuracy with less stringent training data requirements for the downstream task. However, the properties of pre-trained audio embeddings are not fully understood. Specifically, and unlike traditionally engineered features, the representations extracted from pre-trained deep networks may embed and propagate biases from the model's training regime. This work investigates the phenomenon of bias propagation in the context of pre-trained audio representations for the task of instrument recognition. We first demonstrate that three different pre-trained representations (VGGish, OpenL3, and YAMNet) exhibit comparable performance when constrained to a single dataset, but differ in their ability to generalize across datasets (OpenMIC and IRMAS). We then investigate dataset identity and genre distribution as potential sources of bias. Finally, we propose and evaluate post-processing countermeasures to mitigate the effects of bias, and improve generalization across datasets.
M3 - Conference contribution
AN - SCOPUS:85209586575
T3 - 24th International Society for Music Information Retrieval Conference, ISMIR 2023 - Proceedings
SP - 64
EP - 70
BT - 24th International Society for Music Information Retrieval Conference, ISMIR 2023 - Proceedings
A2 - Sarti, Augusto
A2 - Antonacci, Fabio
A2 - Sandler, Mark
A2 - Bestagini, Paolo
A2 - Dixon, Simon
A2 - Liang, Beici
A2 - Richard, Gaël
A2 - Pauwels, Johan
PB - International Society for Music Information Retrieval
T2 - 24th International Society for Music Information Retrieval Conference, ISMIR 2023
Y2 - 5 November 2023 through 9 November 2023
ER -