TY - GEN
T1 - Correcting motion distortions in time-of-flight imaging
AU - Fülöp-Balogh, Beatrix Emőke
AU - Bonneel, Nicolas
AU - Digne, Julie
N1 - Publisher Copyright:
© 2018 Copyright held by the owner/author(s). Publication rights licensed to ACM.
PY - 2018/12/10
Y1 - 2018/12/10
N2 - Time-of-flight point cloud acquisition systems have grown in precision and robustness over the past few years. However, even subtle motion can induce significant distortions due to the long acquisition time. In contrast, there exist sensors that produce depth maps at a higher frame rate, but they suffer from low resolution and accuracy. In this paper, we correct distortions produced by small motions in time-of-flight acquisitions and even output a corrected animated sequence by combining a slow but high-resolution time-of-flight LiDAR system and a fast but low-resolution consumer depth sensor. We cast the problem as a curve-to-volume registration, by seeing a LiDAR point cloud as a curve in a 4-dimensional spacetime and the captured low-resolution depth video as a 4-dimensional spacetime volume. Our approach starts by registering both captured sequences in 4D, in a coarse-to-fine approach. It then computes an optical flow between the low-resolution frames and finally transfers high-resolution details by advecting along the flow. We demonstrate the efficiency of our approach on both synthetic data, on which we can compute registration errors, and real data.
AB - Time-of-flight point cloud acquisition systems have grown in precision and robustness over the past few years. However, even subtle motion can induce significant distortions due to the long acquisition time. In contrast, there exist sensors that produce depth maps at a higher frame rate, but they suffer from low resolution and accuracy. In this paper, we correct distortions produced by small motions in time-of-flight acquisitions and even output a corrected animated sequence by combining a slow but high-resolution time-of-flight LiDAR system and a fast but low-resolution consumer depth sensor. We cast the problem as a curve-to-volume registration, by seeing a LiDAR point cloud as a curve in a 4-dimensional spacetime and the captured low-resolution depth video as a 4-dimensional spacetime volume. Our approach starts by registering both captured sequences in 4D, in a coarse-to-fine approach. It then computes an optical flow between the low-resolution frames and finally transfers high-resolution details by advecting along the flow. We demonstrate the efficiency of our approach on both synthetic data, on which we can compute registration errors, and real data.
KW - 3D Video
KW - Detail transfer
KW - Dynamic Point Sets
UR - https://www.scopus.com/pages/publications/85061824546
U2 - 10.1145/3274247.3274512
DO - 10.1145/3274247.3274512
M3 - Conference contribution
AN - SCOPUS:85061824546
T3 - Proceedings - MIG 2018: ACM SIGGRAPH Conference on Motion, Interaction, and Games
BT - Proceedings - MIG 2018
A2 - Spencer, Stephen N.
PB - Association for Computing Machinery, Inc
T2 - 11th Annual International Conference on Motion, Interaction and Games, MIG 2018
Y2 - 8 November 2018 through 10 November 2018
ER -