Color and depth-based superpixels for background and object segmentation

Research output: Contribution to journal › Conference article › peer-review

Abstract

We present an approach to multimodal semantic segmentation based on both color and depth information. Our goal is to build a semantic map containing high-level information, namely objects and background categories (carpet, parquet, walls, ...). This approach was developed for the Panoramic and Active Camera for Object Mapping (PACOM) project in order to participate in a French exploration and mapping contest called CAROTTE. Our method is based on a structured output prediction strategy to detect the various elements of the environment, using both color and depth images from the Kinect camera. The image is first over-segmented into small homogeneous regions called "superpixels", which are then classified and characterized using a bag-of-features representation. For each superpixel, texture and color descriptors are computed from the color image, and 3D descriptors are computed from the associated depth image. A Markov Random Field (MRF) model then fuses texture, color, depth, and neighborhood information to assign a label to each superpixel extracted from the image. We evaluate different segmentation algorithms for the semantic labeling task and demonstrate the benefit of integrating depth information into the superpixel computation.
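As a hedged illustration of the MRF fusion step described above (a toy sketch, not the authors' implementation), label assignment over a graph of superpixel neighbors can be approximated with iterated conditional modes on a Potts model: each node's label is updated to minimize a unary cost (standing in for the descriptor-based classifier score) plus a penalty for disagreeing with its neighbors. The graph, costs, and smoothing weight below are all invented for the example.

```python
import numpy as np

def icm_mrf(unary, neighbors, lam=1.0, iters=10):
    """Iterated conditional modes on a Potts MRF.

    unary:     (n_nodes, n_labels) cost of assigning each label to each node
    neighbors: list of neighbor-index lists, one per node
    lam:       weight of the pairwise (smoothness) term
    """
    labels = unary.argmin(axis=1)  # start from the unary-best labels
    n_labels = unary.shape[1]
    for _ in range(iters):
        changed = False
        for i in range(len(labels)):
            # Potts pairwise cost: lam for each neighbor with a different label
            pair = np.array([
                lam * sum(labels[j] != k for j in neighbors[i])
                for k in range(n_labels)
            ])
            best = int(np.argmin(unary[i] + pair))
            if best != labels[i]:
                labels[i] = best
                changed = True
        if not changed:
            break
    return labels

# Toy example: 4 "superpixels" in a chain; node 2 has a noisy unary preference
unary = np.array([[0.0, 1.0],
                  [0.0, 1.0],
                  [0.6, 0.4],   # weakly prefers label 1 (noise)
                  [0.0, 1.0]])
neighbors = [[1], [0, 2], [1, 3], [2]]
print(icm_mrf(unary, neighbors, lam=0.5))  # neighborhood smoothing flips node 2 to label 0
```

In the paper's setting, the unary costs would come from the texture, color, and depth descriptors of each superpixel, and the neighbor lists from superpixel adjacency in the image.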

Original language: English
Pages (from-to): 1307-1315
Number of pages: 9
Journal: Procedia Engineering
Volume: 41
DOIs
Publication status: Published - 1 Jan 2012
Event: 2nd International Symposium on Robotics and Intelligent Sensors 2012, IRIS 2012 - Kuching, Sarawak, Malaysia
Duration: 4 Sept 2012 - 6 Sept 2012

Keywords

  • Image segmentation
  • Markov Random Field

