A multimodal approach to speaker diarization on TV talk-shows

Abstract
In this article, we propose solutions to the problem of speaker diarization of TV talk-shows, a problem for which adapted multimodal approaches, relying on data streams beyond audio alone, remain largely underexploited. We therefore propose an original system that leverages prior knowledge of the structure of this type of content, especially visual information relating to the active speakers, to improve diarization performance. The architecture of this system decomposes into two main stages. First, a reliable training set is created, in an unsupervised fashion, for each participant of the TV program being processed. This data is assembled by associating visual and audio descriptors carefully selected in a clustering cascade. Then, Support Vector Machines are used to classify the speech data of the given TV program. The performance of this new architecture is assessed on two French talk-show collections: Le Grand Échiquier and On n'a pas tout dit. The results show that our system outperforms state-of-the-art methods, evidencing the effectiveness of kernel-based methods, as well as visual cues, in multimodal approaches to speaker diarization of challenging content such as TV talk-shows.
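As a loose illustration of the two-stage architecture described in the abstract, the sketch below builds pseudo-labelled training data by unsupervised clustering and then trains an SVM on it. This is only a minimal stand-in on synthetic data: the synthetic features, scikit-learn clustering choice, and SVM parameters are assumptions for illustration, not the paper's actual descriptors, clustering cascade, or configuration.

```python
# Hypothetical sketch of the two-stage idea: unsupervised pseudo-labelling
# followed by SVM classification (not the paper's actual pipeline).
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic audio-visual descriptors for two hypothetical speakers
# (40 segments each, 8-dimensional features).
speaker_a = rng.normal(loc=0.0, scale=0.4, size=(40, 8))
speaker_b = rng.normal(loc=3.0, scale=0.4, size=(40, 8))
segments = np.vstack([speaker_a, speaker_b])

# Stage 1: unsupervised grouping of segments stands in for the clustering
# cascade; the resulting clusters serve as pseudo-labelled training data.
pseudo_labels = AgglomerativeClustering(n_clusters=2).fit_predict(segments)

# Stage 2: a kernel SVM is trained on the pseudo-labelled set and then
# used to classify the speech segments of the program.
svm = SVC(kernel="rbf").fit(segments, pseudo_labels)
predictions = svm.predict(segments)

# Agreement between the SVM decisions and the pseudo-labels.
agreement = (predictions == pseudo_labels).mean()
print(round(agreement, 2))
```

On such well-separated synthetic clusters the SVM simply reproduces the pseudo-labels; the interest of the real system lies in how reliably Stage 1 selects its audio-visual training examples.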
| Original language | English |
|---|---|
| Article number | 6380624 |
| Pages (from-to) | 509-520 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Multimedia |
| Volume | 15 |
| Issue number | 3 |
| DOIs | |
| Publication status | Published - 1 Apr 2013 |
| Externally published | Yes |
Keywords
- Fusion
- Joint audiovisual processing
- Multi-modality
- SVM classification
- Speaker diarization
- Talk-show
- Unsupervised learning