TY - GEN
T1 - Representing Shape Collections with Alignment-Aware Linear Models
AU - Loiseau, Romain
AU - Monnier, Tom
AU - Aubry, Mathieu
AU - Landrieu, Loic
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/1/1
Y1 - 2021/1/1
N2 - In this paper, we revisit the classical representation of 3D point clouds as linear shape models. Our key insight is to leverage deep learning to represent a collection of shapes as affine transformations of low-dimensional linear shape models. Each linear model is characterized by a shape prototype, a low-dimensional shape basis, and two neural networks. The networks take as input a point cloud and predict the coordinates of a shape in the linear basis and the affine transformation which best approximate the input. Both linear models and neural networks are learned end-to-end using a single reconstruction loss. The main advantage of our approach is that, in contrast to many recent deep approaches which learn feature-based complex shape representations, our model is explicit and every operation occurs in 3D space. As a result, our linear shape models can be easily visualized and annotated, and failure cases can be visually understood. While our main goal is to introduce a compact and interpretable representation of shape collections, we show it leads to state-of-the-art results for few-shot segmentation. Code and data are available at: https://romainloiseau.github.io/deep-linear-shapes
AB - In this paper, we revisit the classical representation of 3D point clouds as linear shape models. Our key insight is to leverage deep learning to represent a collection of shapes as affine transformations of low-dimensional linear shape models. Each linear model is characterized by a shape prototype, a low-dimensional shape basis, and two neural networks. The networks take as input a point cloud and predict the coordinates of a shape in the linear basis and the affine transformation which best approximate the input. Both linear models and neural networks are learned end-to-end using a single reconstruction loss. The main advantage of our approach is that, in contrast to many recent deep approaches which learn feature-based complex shape representations, our model is explicit and every operation occurs in 3D space. As a result, our linear shape models can be easily visualized and annotated, and failure cases can be visually understood. While our main goal is to introduce a compact and interpretable representation of shape collections, we show it leads to state-of-the-art results for few-shot segmentation. Code and data are available at: https://romainloiseau.github.io/deep-linear-shapes
UR - https://www.scopus.com/pages/publications/85125007297
U2 - 10.1109/3DV53792.2021.00112
DO - 10.1109/3DV53792.2021.00112
M3 - Conference contribution
AN - SCOPUS:85125007297
T3 - Proceedings - 2021 International Conference on 3D Vision, 3DV 2021
SP - 1044
EP - 1053
BT - Proceedings - 2021 International Conference on 3D Vision, 3DV 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 9th International Conference on 3D Vision, 3DV 2021
Y2 - 1 December 2021 through 3 December 2021
ER -