TY - GEN
T1 - RGBD object recognition and visual texture classification for indoor semantic mapping
AU - Filliat, David
AU - Battesti, Emmanuel
AU - Bazeille, Stéphane
AU - Duceux, Guillaume
AU - Gepperth, Alexander
AU - Harrath, Lotfi
AU - Jebari, Islem
AU - Pereira, Rafael
AU - Tapus, Adriana
AU - Meyer, Cedric
AU - Ieng, Sio Hoi
AU - Benosman, Ryad
AU - Cizeron, Eddy
AU - Mamanna, Jean Charles
AU - Pothier, Benoit
PY - 2012/7/16
Y1 - 2012/7/16
N2 - We present a mobile robot whose goal is to autonomously explore an unknown indoor environment and to build a semantic map containing high-level information similar to that extracted by humans. This information includes the rooms, their connectivity, the objects they contain, and the material of the walls and ground. This robot was developed in order to participate in a French exploration and mapping contest called CAROTTE, whose goal is to produce easily interpretable maps of an unknown environment. In particular, we present our object detection approach based on a color+depth camera that fuses 3D, color, and texture information through a neural network for robust object recognition. We also present the material recognition approach based on machine learning applied to vision. We demonstrate the performance of these modules on image databases and provide examples of the full system working in real environments.
AB - We present a mobile robot whose goal is to autonomously explore an unknown indoor environment and to build a semantic map containing high-level information similar to that extracted by humans. This information includes the rooms, their connectivity, the objects they contain, and the material of the walls and ground. This robot was developed in order to participate in a French exploration and mapping contest called CAROTTE, whose goal is to produce easily interpretable maps of an unknown environment. In particular, we present our object detection approach based on a color+depth camera that fuses 3D, color, and texture information through a neural network for robust object recognition. We also present the material recognition approach based on machine learning applied to vision. We demonstrate the performance of these modules on image databases and provide examples of the full system working in real environments.
UR - https://www.scopus.com/pages/publications/84863702617
U2 - 10.1109/TePRA.2012.6215666
DO - 10.1109/TePRA.2012.6215666
M3 - Conference contribution
AN - SCOPUS:84863702617
SN - 9781467308557
T3 - 2012 IEEE Conference on Technologies for Practical Robot Applications, TePRA 2012
SP - 127
EP - 132
BT - 2012 IEEE Conference on Technologies for Practical Robot Applications, TePRA 2012
T2 - 2012 IEEE International Conference on Technologies for Practical Robot Applications, TePRA 2012
Y2 - 23 April 2012 through 24 April 2012
ER -