Evidential grammars: A compositional approach for scene understanding. Application to multimodal street data

Jean Baptiste Bordes, Franck Davoine, Philippe Xu, Thierry Denœux

Research output: Contribution to journal › Article › peer-review

Abstract

Automatic scene understanding from multimodal data is a key task in the design of fully autonomous vehicles. The theory of belief functions has proved effective for fusing information from several sensors at the superpixel level. Here, we propose a novel framework, called evidential grammars, which extends stochastic grammars by replacing probabilities with belief functions. This framework allows us to fuse local information with prior and contextual information, also modeled as belief functions. The use of belief functions in a compositional model is shown to allow for a better representation of uncertainty in the priors and for greater flexibility of the model. The relevance of our approach is demonstrated on multimodal traffic scene data from the KITTI benchmark suite.
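As an illustration of the belief-function fusion the abstract refers to, the following is a minimal sketch of Dempster's rule of combination, the standard fusion operator of Dempster–Shafer theory. It is not the paper's evidential-grammar algorithm; the frame of discernment, mass values, and sensor names below are purely hypothetical.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    using Dempster's rule: intersect focal sets, accumulate products,
    and renormalize by the total non-conflicting mass."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:  # non-empty intersection: transfer mass to it
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:      # empty intersection: conflicting evidence
            conflict += x * y
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Hypothetical example: frame {road, car}, two sensor-level mass functions
R, C = frozenset({"road"}), frozenset({"car"})
RC = R | C  # "road or car" (ignorance)
m_lidar = {R: 0.6, RC: 0.4}
m_camera = {R: 0.5, C: 0.3, RC: 0.2}
m_fused = dempster_combine(m_lidar, m_camera)
```

With these inputs the conflict mass is 0.18, so the fused masses are renormalized by 0.82; the resulting belief on "road" rises above either sensor's individual assignment, which is the behavior the superpixel-level fusion mentioned above relies on.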

Original language: English
Pages (from-to): 1173-1185
Number of pages: 13
Journal: Applied Soft Computing Journal
Volume: 61
Publication status: Published - 1 Dec 2017

Keywords

  • Belief functions
  • Computer vision
  • Dempster–Shafer theory
  • Machine learning

