TY - GEN
T1 - IS3
T2 - 2025 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, WASPAA 2025
AU - Berger, Clémentine
AU - Stamatiadis, Paraskevas
AU - Badeau, Roland
AU - Essid, Slim
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025/1/1
Y1 - 2025/1/1
N2 - We are interested in audio systems capable of performing differentiated processing of stationary backgrounds and isolated acoustic events within an acoustic scene, whether to apply specific processing methods to each part or to focus solely on one while ignoring the other. Such systems have applications in real-world scenarios, including robust adaptive audio rendering systems (e.g., EQ or compression), plosive attenuation in voice mixing, noise suppression or reduction, robust acoustic event classification, or even bioacoustics. To this end, we introduce IS3, a neural network designed for Impulsive-Stationary Sound Separation, which isolates impulsive acoustic events from the stationary background using a deep filtering approach and can act as a pre-processing stage for the above-mentioned tasks. To ensure optimal training, we propose a sophisticated data generation pipeline that curates and adapts existing datasets for this task. We demonstrate that a learning-based approach, built on a relatively lightweight neural architecture and trained with well-designed and varied data, is successful at this previously unaddressed task, outperforming both the Harmonic-Percussive Sound Separation masking method, adapted from music signal processing research, and wavelet filtering on objective separation metrics.
AB - We are interested in audio systems capable of performing differentiated processing of stationary backgrounds and isolated acoustic events within an acoustic scene, whether to apply specific processing methods to each part or to focus solely on one while ignoring the other. Such systems have applications in real-world scenarios, including robust adaptive audio rendering systems (e.g., EQ or compression), plosive attenuation in voice mixing, noise suppression or reduction, robust acoustic event classification, or even bioacoustics. To this end, we introduce IS3, a neural network designed for Impulsive-Stationary Sound Separation, which isolates impulsive acoustic events from the stationary background using a deep filtering approach and can act as a pre-processing stage for the above-mentioned tasks. To ensure optimal training, we propose a sophisticated data generation pipeline that curates and adapts existing datasets for this task. We demonstrate that a learning-based approach, built on a relatively lightweight neural architecture and trained with well-designed and varied data, is successful at this previously unaddressed task, outperforming both the Harmonic-Percussive Sound Separation masking method, adapted from music signal processing research, and wavelet filtering on objective separation metrics.
UR - https://www.scopus.com/pages/publications/105026952634
U2 - 10.1109/WASPAA66052.2025.11230927
DO - 10.1109/WASPAA66052.2025.11230927
M3 - Conference contribution
AN - SCOPUS:105026952634
T3 - IEEE Workshop on Applications of Signal Processing to Audio and Acoustics
BT - Proceedings of the 2025 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, WASPAA 2025
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 12 October 2025 through 15 October 2025
ER -