Learning structure-from-motion from motion

Clément Pinard, Laure Chevalley, Antoine Manzanera, David Filliat

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

This work begins by questioning the quality metrics used to evaluate deep neural networks that predict depth from a single image, and then the practical usability of recently published works on unsupervised learning of depth from videos. These works all predict depth from a single image, so the depth is only known up to an undetermined scale factor, which is not sufficient for practical use cases that need an absolute depth map, i.e. one with a determined scale factor. To overcome these limitations, we propose to learn, in the same unsupervised manner, a depth map inference system from monocular videos that takes a pair of images as input. This algorithm actually learns structure-from-motion from motion, and not only structure from context appearance. The scale factor issue is treated explicitly, and the absolute depth map can be estimated from the camera displacement magnitude, which can easily be measured with cheap external sensors. Our solution is also much more robust to domain variation and adaptation via fine-tuning, because it does not rely entirely on depth from context. Two use cases are considered: unstabilized moving camera videos, and stabilized ones. This choice is motivated by the UAV (Unmanned Aerial Vehicle) use case, which generally provides reliable orientation measurements. We provide a set of experiments showing that, used in real conditions where only speed can be known, our network outperforms competitors on most depth quality measures. Results are given on the well-known KITTI dataset [5], which provides robust stabilization for our second use case, but also contains moving scenes that are very typical of the in-car road context. We then present results on a synthetic dataset that we believe is more representative of typical UAV scenes. Lastly, we present two domain adaptation use cases showing the superior robustness of our method compared to single-view depth algorithms, which indicates that it is better suited to highly variable visual contexts.
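The abstract states that the absolute depth map can be estimated from the camera displacement magnitude. A minimal sketch of that scale-recovery step, assuming a network that outputs depth and translation up to a common unknown scale (the function and variable names here are hypothetical, not the authors' code):

```python
import numpy as np

def rescale_depth(depth_up_to_scale, t_pred, displacement_measured):
    """Rescale an up-to-scale depth map to absolute units.

    depth_up_to_scale: HxW depth map predicted up to an unknown scale.
    t_pred: predicted camera translation (3,), sharing that same scale.
    displacement_measured: metric displacement magnitude between the two
        frames, e.g. speed from an external sensor times the frame interval.
    """
    # The ratio of measured to predicted displacement is the missing scale.
    scale = displacement_measured / np.linalg.norm(t_pred)
    return depth_up_to_scale * scale

# Toy example: the network predicts a unit-norm translation while a speed
# sensor indicates the camera actually moved 0.3 m between the two frames.
depth = np.ones((4, 4))           # relative depth map
t = np.array([0.0, 0.0, 1.0])     # predicted translation direction
metric_depth = rescale_depth(depth, t, 0.3)
```

This only works because the pair-of-images formulation ties the depth scale to the translation scale; a single-image network has no such quantity to calibrate against.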

Original language: English
Title of host publication: Computer Vision – ECCV 2018 Workshops, Proceedings
Editors: Laura Leal-Taixé, Stefan Roth
Publisher: Springer Verlag
Pages: 363-376
Number of pages: 14
ISBN (Print): 9783030110147
DOIs
Publication status: Published - 1 Jan 2019
Externally published: Yes
Event: 15th European Conference on Computer Vision, ECCV 2018 - Munich, Germany
Duration: 8 Sept 2018 – 14 Sept 2018

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11131 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 15th European Conference on Computer Vision, ECCV 2018
Country/Territory: Germany
City: Munich
Period: 8/09/18 – 14/09/18

