Feature Learning with Matrix Factorization Applied to Acoustic Scene Classification

Victor Bisot, Romain Serizel, Slim Essid, Gaël Richard

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, we study the usefulness of various matrix factorization methods for learning features for the acoustic scene classification (ASC) problem. A common way of addressing ASC has been to engineer features capable of capturing the specificities of acoustic environments. Instead, we show that better representations of the scenes can be automatically learned from time-frequency representations using matrix factorization techniques. We mainly focus on extensions of principal component analysis and nonnegative matrix factorization, including sparse, kernel-based and convolutive variants, as well as a novel supervised dictionary learning variant. An experimental evaluation is performed on two of the largest ASC datasets available in order to compare and discuss the usefulness of these methods for the task. We show that the unsupervised learning methods provide better representations of acoustic scenes than the best conventional hand-crafted features on both datasets. Furthermore, the introduction of a novel nonnegative supervised matrix factorization model, together with deep neural networks trained on spectrograms, allows us to reach further improvements.
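To make the idea of feature learning by matrix factorization concrete, the following is a minimal sketch (not the authors' actual pipeline) of one common setup: each recording's magnitude spectrogram is averaged over time into a nonnegative spectral vector, the vectors are stacked into a data matrix, and nonnegative matrix factorization decomposes that matrix into a spectral dictionary and per-recording activations that can serve as learned features for a classifier. The shapes, the random data and the use of scikit-learn's `NMF` are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

# Illustrative stand-in data: 8 recordings, each a magnitude spectrogram
# with 64 frequency bins and 100 time frames (real input would come from
# an STFT or mel spectrogram of the audio).
rng = np.random.default_rng(0)
n_recordings, n_freq, n_frames = 8, 64, 100
spectrograms = rng.random((n_recordings, n_freq, n_frames))

# Time-average each spectrogram so every recording becomes one
# nonnegative spectral vector; rows of V are recordings.
V = spectrograms.mean(axis=2)                    # shape (8, 64)

# Factorize V ~ W @ H: H holds spectral basis vectors (the dictionary),
# W holds per-recording activations used as the learned features.
nmf = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(V)                         # (8, 4) learned features
H = nmf.components_                              # (4, 64) spectral dictionary

print(W.shape, H.shape)
```

In a full ASC system the rows of `W` would then be fed to a classifier (e.g. logistic regression or an MLP), and the supervised variants discussed in the paper couple the factorization with the class labels instead of learning the dictionary purely unsupervised.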

Original language: English
Pages (from-to): 1216-1229
Number of pages: 14
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
Volume: 25
Issue number: 6
Publication status: Published - 1 Jun 2017
Externally published: Yes

Keywords

  • Acoustic scene classification
  • feature learning
  • matrix factorization

