Abstract
An audio indexing system aims to describe audio content by identifying, labeling, or categorizing different acoustic events. Since the resulting audio classification and indexing is meant for direct human consumption, it is highly desirable that it produce perceptually relevant results. This can be achieved by integrating specific knowledge of the human auditory system into the design process to varying extents. In this paper, we highlight some of the important concepts used in audio classification and indexing that are perceptually motivated or that exploit principles of perception. In particular, we discuss several strategies for integrating human perception, including: 1) the use of generic audition models; 2) the use of perceptually relevant features for the analysis stage that are perceptually justified either as a component of a hearing model or as being correlated with a perceptual dimension of sound similarity; and 3) the involvement of the user in the audio indexing or classification task. We also illustrate some of the recent trends in semantic audio retrieval that approximate higher-level perceptual processing and cognitive aspects of human audio recognition capabilities, including affect-based audio retrieval.
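As an illustration of the second strategy, perceptually relevant features for the analysis stage, the sketch below constructs a mel-scale triangular filterbank, a classic perceptually motivated front end whose frequency warping approximates the resolution of human pitch perception. This is a minimal, self-contained example for illustration only; the function names and parameter defaults (`n_filters`, `n_fft`, `sample_rate`) are assumptions, not taken from the paper.

```python
import numpy as np

def hz_to_mel(f):
    # Mel scale: a perceptually motivated warping of the frequency axis.
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    # Inverse of the mel-scale mapping above.
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters=26, n_fft=512, sample_rate=16000):
    """Build triangular filters equally spaced on the mel scale.

    Returns an (n_filters, n_fft // 2 + 1) matrix that maps a linear-frequency
    power spectrum to perceptually spaced filterbank energies.
    """
    low_mel, high_mel = hz_to_mel(0.0), hz_to_mel(sample_rate / 2.0)
    # Filter edge frequencies: uniform in mel, hence nonuniform in Hz.
    mel_points = np.linspace(low_mel, high_mel, n_filters + 2)
    hz_points = mel_to_hz(mel_points)
    bins = np.floor((n_fft + 1) * hz_points / sample_rate).astype(int)

    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        left, center, right = bins[i], bins[i + 1], bins[i + 2]
        for k in range(left, center):          # rising slope
            fbank[i, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):         # falling slope
            fbank[i, k] = (right - k) / max(right - center, 1)
    return fbank
```

Applying this matrix to an FFT power spectrum (followed by a log compression and, optionally, a DCT) yields log-mel or MFCC features, representations of the kind the paper groups under hearing-model components correlated with perceptual sound similarity.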
| Original language | English |
|---|---|
| Article number | 6560388 |
| Pages (from-to) | 1939-1954 |
| Number of pages | 16 |
| Journal | Proceedings of the IEEE |
| Volume | 101 |
| Issue number | 9 |
| DOIs | |
| Publication status | Published - 1 Jan 2013 |
Keywords
- Affect-based audio retrieval
- audio classification
- audio indexing
- music indexing
- music information retrieval
- musical timbre recognition
- perceptual audio features
- perceptual signal representations
- semantic audio retrieval