Improving speaker verification using ALISP-based specific GMMs

Asmaa El Hannani, Dijana Petrovska-Delacrétaz

Research output: Contribution to journal › Conference article › peer-review

Abstract

In recent years, research in speaker verification has expanded from using only the acoustic content of speech to exploiting higher-level sources of information, such as linguistic content, pronunciation and idiolectal word usage. Phone-based models have been shown to be promising for speaker verification, but they require transcribed speech data in the training phase. The present paper describes a segmental Gaussian Mixture Model (GMM) text-independent speaker verification system based on data-driven Automatic Language Independent Speech Processing (ALISP). This system uses GMMs at the segmental level in order to exploit the different amounts of speaker discrimination provided by the individual ALISP classes. We compare the segmental ALISP-based GMM method with a baseline global GMM system. Results obtained on the NIST 2004 Speaker Recognition Evaluation data show that the segmental approach outperforms the baseline system. They also show that not all ALISP units contribute to the discrimination between speakers.
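The segmental scoring idea in the abstract can be illustrated with a minimal sketch: one world GMM and one speaker GMM per ALISP class, with a trial scored by a weighted average of per-class log-likelihood ratios. The class labels, data, weighting scheme and GMM sizes below are all hypothetical stand-ins (the paper derives its classes from data-driven ALISP segmentation and uses real cepstral features); this is not the authors' implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def frames(mean, n):
    # Toy 2-D "feature vectors"; a real system would use cepstral features.
    return rng.normal(mean, 1.0, size=(n, 2))

classes = ["HA", "HB"]  # hypothetical ALISP class labels

# Background (world) and target-speaker training data, already
# segmented into per-class frame sets.
world = {c: frames(0.0, 400) for c in classes}
target = {c: frames(1.5, 200) for c in classes}

# One GMM per ALISP class for the world model and for the speaker model.
ubm = {c: GaussianMixture(n_components=4, random_state=0).fit(world[c])
       for c in classes}
spk = {c: GaussianMixture(n_components=4, random_state=0).fit(target[c])
       for c in classes}

def segmental_llr(test_by_class, weights=None):
    """Weighted average of per-class log-likelihood ratios.

    `weights` lets less discriminative ALISP classes be down-weighted
    or dropped, in the spirit of the paper's finding that not all
    units contribute to speaker discrimination.
    """
    w = weights or {c: 1.0 for c in classes}
    total = sum(w.values())
    return sum(w[c] * (spk[c].score(test_by_class[c]) -
                       ubm[c].score(test_by_class[c]))
               for c in classes) / total

genuine = {c: frames(1.5, 100) for c in classes}
impostor = {c: frames(0.0, 100) for c in classes}
print(segmental_llr(genuine) > segmental_llr(impostor))  # expect True
```

A genuine trial should score above an impostor trial; the per-class decomposition is what allows individual ALISP units to be weighted by how discriminative they are.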

Original language: English
Pages (from-to): 580-587
Number of pages: 8
Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 3546
Publication status: Published - 1 Jan 2005
Externally published: Yes
Event: 5th International Conference on Audio- and Video-Based Biometric Person Authentication, AVBPA 2005 - Hilton Rye Town, NY, United States
Duration: 20 Jul 2005 - 22 Jul 2005
