Environment exploration for object-based visual saliency learning

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Searching for objects in an indoor environment can be drastically improved if a task-specific visual saliency is available. We describe a method to incrementally learn such an object-based visual saliency directly on a robot, using an environment exploration mechanism. We first define saliency based on a geometrical criterion and use this definition to segment salient elements given an attentive but costly and restrictive observation of the environment. These elements are used to train a fast classifier that predicts salient objects from large-scale visual features. To learn better and faster, we use an exploration strategy based on intrinsic motivation to drive the robot's displacement toward relevant observations. Our approach has been tested on a robot in indoor environments as well as on publicly available RGB-D image sequences. We demonstrate that the approach outperforms several state-of-the-art methods for indoor object detection and that the exploration strategy can drastically decrease the time required to learn saliency.
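The abstract mentions an exploration strategy driven by intrinsic motivation. A common way to realize such a signal is learning progress: regions where the saliency classifier's prediction error is dropping fastest are the most informative to revisit. The sketch below is purely illustrative, assuming a learning-progress criterion over a discrete set of regions; the class name, window size, and region representation are hypothetical, not the authors' implementation.

```python
class IntrinsicMotivationExplorer:
    """Illustrative sketch: pick the next region to observe by learning
    progress, i.e. the recent decrease in prediction error. Regions the
    robot has not yet sampled enough are treated as maximally interesting."""

    def __init__(self, regions, window=3):
        # Per-region history of classifier prediction errors.
        self.errors = {r: [] for r in regions}
        self.window = window

    def record_error(self, region, error):
        self.errors[region].append(error)

    def learning_progress(self, region):
        hist = self.errors[region]
        if len(hist) < 2 * self.window:
            # Not enough samples to estimate progress: explore it first.
            return float("inf")
        recent = sum(hist[-self.window:]) / self.window
        older = sum(hist[-2 * self.window:-self.window]) / self.window
        # Positive when the error has been dropping, i.e. learning is happening.
        return older - recent

    def next_region(self):
        return max(self.errors, key=self.learning_progress)
```

In use, the robot would alternate between observing the region returned by `next_region()`, updating the saliency classifier on the new observation, and recording the resulting error; regions where learning has plateaued stop attracting attention, which is what shortens the overall learning time.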

Original language: English
Title of host publication: 2016 IEEE International Conference on Robotics and Automation, ICRA 2016
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 2303-2309
Number of pages: 7
ISBN (Electronic): 9781467380263
DOIs
Publication status: Published - 8 Jun 2016
Externally published: Yes
Event: 2016 IEEE International Conference on Robotics and Automation, ICRA 2016 - Stockholm, Sweden
Duration: 16 May 2016 to 21 May 2016

Publication series

Name: Proceedings - IEEE International Conference on Robotics and Automation
Volume: 2016-June
ISSN (Print): 1050-4729

Conference

Conference: 2016 IEEE International Conference on Robotics and Automation, ICRA 2016
Country/Territory: Sweden
City: Stockholm
Period: 16/05/16 to 21/05/16
