Abstract
In recent years, research in speaker verification has expanded from using only the acoustic content of speech to exploiting higher-level sources of information, such as linguistic content, pronunciation, and idiolectal word usage. Phone-based models have shown promise for speaker verification, but they require transcribed speech data in the training phase. The present paper describes a segmental Gaussian Mixture Model (GMM) system for text-independent speaker verification based on data-driven Automatic Language Independent Speech Processing (ALISP). The system applies GMMs at the segmental level in order to exploit the differing amounts of speaker discrimination provided by the individual ALISP classes. We compared the segmental ALISP-based GMM method with a baseline global GMM system. Results obtained on the NIST 2004 Speaker Recognition Evaluation data show that the segmental approach outperforms the baseline system, and that not all ALISP units contribute to the discrimination between speakers.
| Original language | English |
|---|---|
| Pages (from-to) | 580-587 |
| Number of pages | 8 |
| Journal | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
| Volume | 3546 |
| DOIs | |
| Publication status | Published - 1 Jan 2005 |
| Externally published | Yes |
| Event | 5th International Conference on Audio- and Video-Based Biometric Person Authentication, AVBPA 2005 - Hilton Rye Town, NY, United States. Duration: 20 Jul 2005 → 22 Jul 2005 |