Multimodal information fusion for urban scene understanding

Philippe Xu, Franck Davoine, Jean Baptiste Bordes, Huijing Zhao, Thierry Denœux

Research output: Contribution to journal › Article › peer-review

Abstract

This paper addresses the problem of scene understanding for driver assistance systems. To recognize the large number of objects that may be found on the road, several sensors and decision algorithms have to be used. The proposed approach is based on the representation of all available information in over-segmented image regions. The main novelty of the framework is its capability to incorporate new classes of objects and to include new sensors or detection methods while remaining robust to sensor failures. Several classes, such as ground, vegetation, or sky, are considered, as well as three different sensors. The approach was evaluated on real, publicly available urban driving scene data.
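The keywords indicate that the fusion framework rests on the theory of belief functions (Dempster–Shafer theory). As an illustration only (not the paper's actual implementation), the sketch below shows how evidence from two hypothetical sensors about a single image region could be combined with Dempster's rule: mass functions are dictionaries mapping sets of class labels to belief masses, and conflicting evidence is renormalized away.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions with Dempster's rule.

    Each mass function is a dict mapping a frozenset of class labels
    (a focal element) to a mass in [0, 1]; masses sum to 1.
    """
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:  # compatible evidence: mass goes to the intersection
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:      # incompatible evidence: accumulate as conflict
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    k = 1.0 - conflict  # normalization factor
    return {s: m / k for s, m in combined.items()}

# Hypothetical per-region evidence from a camera and a lidar over
# the frame of discernment {ground, vegetation, sky}.
G, V, S = "ground", "vegetation", "sky"
m_cam = {frozenset({G}): 0.6,
         frozenset({G, V}): 0.3,
         frozenset({G, V, S}): 0.1}       # partial ignorance
m_lidar = {frozenset({G}): 0.5,
           frozenset({V}): 0.3,
           frozenset({G, V, S}): 0.2}

fused = dempster_combine(m_cam, m_lidar)
for focal, mass in sorted(fused.items(), key=lambda kv: -kv[1]):
    print(set(focal), round(mass, 4))
```

Assigning mass to supersets such as {ground, vegetation, sky} is what lets a sensor express ignorance rather than a forced choice, which is also what makes the scheme degrade gracefully when a sensor fails (its mass function can simply become vacuous).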

Original language: English
Pages (from-to): 331-349
Number of pages: 19
Journal: Machine Vision and Applications
Volume: 27
Issue number: 3
DOIs
Publication status: Published - 1 Apr 2016
Externally published: Yes

Keywords

  • Dempster–Shafer theory
  • Driving scene understanding
  • Evidence theory
  • Information fusion
  • Intelligent vehicles
  • Theory of belief functions
