Information fusion on oversegmented images: An application for urban scene understanding

  • Philippe Xu
  • Franck Davoine
  • Jean Baptiste Bordes
  • Huijing Zhao
  • Thierry Denoeux

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

The large number of tasks one may expect from a driver assistance system leads one to consider many object classes in the neighborhood of the so-called intelligent vehicle. To obtain a correct understanding of the driving scene, all available sources of information must be fused. In this paper, an original fusion framework, operating on segments of over-segmented images and based on the theory of belief functions, is presented. The problem is posed as an image labeling one. The framework is first applied to ground detection using three kinds of sensors. We show that it is flexible enough to accommodate new sensors as well as new object classes, which is demonstrated by subsequently adding sky and vegetation classes. The work was validated on real, publicly available urban driving scene data.
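The fusion operator most commonly associated with the theory of belief functions mentioned in the abstract is Dempster's rule of combination. The sketch below illustrates how evidence from two sensors about a single image segment could be combined under that rule; the sensor names, class labels, and mass values are illustrative assumptions, not taken from the paper.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions whose focal elements are frozensets.

    Masses on intersecting focal elements are multiplied and summed;
    mass falling on the empty set (conflict) is redistributed by
    normalization, as in Dempster's rule.
    """
    combined = {}
    conflict = 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("sources are in total conflict")
    # Normalize by the non-conflicting mass.
    return {a: w / (1.0 - conflict) for a, w in combined.items()}


# Hypothetical evidence about one segment from two sensors.
G, S, V = "ground", "sky", "vegetation"
omega = frozenset({G, S, V})  # frame of discernment

m_camera = {frozenset({G}): 0.6, frozenset({G, V}): 0.3, omega: 0.1}
m_lidar = {frozenset({G}): 0.5, omega: 0.5}

fused = dempster_combine(m_camera, m_lidar)
```

Because every focal element of the second source intersects those of the first, there is no conflict here and the fused mass on {ground} simply accumulates to 0.8, reflecting the agreement of both sources.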

Original language: English
Title of host publication: Proceedings of the 13th IAPR International Conference on Machine Vision Applications, MVA 2013
Publisher: MVA Organization
Pages: 189-193
Number of pages: 5
ISBN (Print): 9784901122139
Publication status: Published - 1 Jan 2013
Externally published: Yes
Event: 13th IAPR International Conference on Machine Vision Applications, MVA 2013 - Kyoto, Japan
Duration: 20 May 2013 - 23 May 2013

Publication series

Name: Proceedings of the 13th IAPR International Conference on Machine Vision Applications, MVA 2013

Conference

Conference: 13th IAPR International Conference on Machine Vision Applications, MVA 2013
Country/Territory: Japan
City: Kyoto
Period: 20/05/13 - 23/05/13
