Utterance level feature aggregation with deep metric learning for speech emotion recognition

Research output: Contribution to journal › Article › peer-review

Abstract

Emotion is a form of high-level paralinguistic information that is intrinsically conveyed by human speech. Automatic speech emotion recognition is an essential challenge for various applications, including mental disease diagnosis, audio surveillance, human behavior understanding, e-learning, and human–machine/robot interaction. In this paper, we introduce a novel speech emotion recognition method based on the Squeeze-and-Excitation ResNet (SE-ResNet) model fed with spectrogram inputs. To overcome the limitations of state-of-the-art techniques, which fail to provide a robust feature representation at the utterance level, the CNN architecture is extended with a trainable, discriminative GhostVLAD clustering layer that aggregates the audio features into a compact, single-utterance vector representation. In addition, an end-to-end neural embedding approach is introduced, based on an emotionally constrained triplet loss function. The loss function integrates the relations between the various emotional patterns and thus improves the latent space data representation. The proposed methodology achieves 83.35% and 64.92% global accuracy rates on the publicly available RAVDESS and CREMA-D datasets, respectively. Compared with the results provided by human observers, the gains in global accuracy exceed 24%. Finally, an objective comparative evaluation against state-of-the-art techniques demonstrates accuracy gains of more than 3%.
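The abstract only sketches the aggregation and metric-learning steps. As a rough, hypothetical illustration (not the authors' code), the PyTorch snippet below implements a GhostVLAD-style pooling layer that turns variable-length frame-level features into a fixed-size utterance embedding, paired with a plain triplet margin loss as a simplified stand-in for the emotionally constrained triplet loss described in the paper, whose exact formulation is not given in the abstract. All class and parameter names (GhostVLADPooling, num_clusters, num_ghost, margin) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GhostVLADPooling(nn.Module):
    """GhostVLAD-style aggregation: soft-assigns frame-level features to K
    real clusters plus G 'ghost' clusters, discards the ghost contributions,
    and flattens the per-cluster residuals into one utterance-level vector."""

    def __init__(self, feat_dim: int, num_clusters: int = 8, num_ghost: int = 2):
        super().__init__()
        self.num_clusters = num_clusters
        total = num_clusters + num_ghost
        self.assign = nn.Linear(feat_dim, total)           # soft-assignment logits
        self.centroids = nn.Parameter(torch.randn(total, feat_dim) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, feat_dim) frame-level CNN features
        soft = F.softmax(self.assign(x), dim=-1)           # (B, T, K+G)
        soft = soft[..., :self.num_clusters]               # drop ghost assignments
        # residual of each frame with respect to each real cluster centroid
        residual = x.unsqueeze(2) - self.centroids[:self.num_clusters]  # (B, T, K, D)
        vlad = (soft.unsqueeze(-1) * residual).sum(dim=1)  # (B, K, D)
        vlad = F.normalize(vlad, dim=-1)                   # per-cluster normalization
        return F.normalize(vlad.flatten(1), dim=-1)        # (B, K*D) utterance vector

# Toy usage: 4 utterances, 120 frames of 128-dim features each.
pool = GhostVLADPooling(feat_dim=128)
frames = torch.randn(4, 120, 128)
embeddings = pool(frames)                                  # shape (4, 8*128)

# Standard triplet margin loss as a stand-in for the paper's emotionally
# constrained variant: pull same-emotion embeddings together, push others apart.
anchor, positive, negative = embeddings[0:1], embeddings[1:2], embeddings[2:3]
loss = F.triplet_margin_loss(anchor, positive, negative, margin=0.3)
```

The ghost clusters absorb uninformative frames (e.g., silence or noise) during soft assignment, so discarding their contributions keeps the final utterance vector focused on emotionally relevant content.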

Original language: English
Article number: 4233
Journal: Sensors (Switzerland)
Volume: 21
Issue number: 12
Publication status: Published - 2 Jun 2021

Keywords

  • Deep convolutional neural networks
  • Emotion metric learning
  • Speech emotion recognition
  • Utterance level feature aggregation
