TY - GEN
T1 - Crafting a multi-task CNN for viewpoint estimation
AU - Massa, Francisco
AU - Marlet, Renaud
AU - Aubry, Mathieu
N1 - Publisher Copyright:
© 2016. The copyright of this document resides with its authors.
PY - 2016/1/1
Y1 - 2016/1/1
N2 - Convolutional Neural Networks (CNNs) were recently shown to provide state-of-the-art results for object category viewpoint estimation. However, different ways of formulating this problem have been proposed, and the competing approaches have been explored with very different design choices. This paper presents a comparison of these approaches in a unified setting, as well as a detailed analysis of the key factors that impact performance. We then present a new joint training method with the detection task and demonstrate its benefit. We also highlight the superiority of classification approaches over regression approaches, quantify the benefits of deeper architectures and extended training data, and demonstrate that synthetic data is beneficial even when using ImageNet training data. By combining all these elements, we demonstrate an improvement of approximately 5% mAVP over previous state-of-the-art results on the Pascal3D+ dataset [29]. In particular, for its most challenging 24-view classification task, we improve the results from 31.1% to 36.1% mAVP.
AB - Convolutional Neural Networks (CNNs) were recently shown to provide state-of-the-art results for object category viewpoint estimation. However, different ways of formulating this problem have been proposed, and the competing approaches have been explored with very different design choices. This paper presents a comparison of these approaches in a unified setting, as well as a detailed analysis of the key factors that impact performance. We then present a new joint training method with the detection task and demonstrate its benefit. We also highlight the superiority of classification approaches over regression approaches, quantify the benefits of deeper architectures and extended training data, and demonstrate that synthetic data is beneficial even when using ImageNet training data. By combining all these elements, we demonstrate an improvement of approximately 5% mAVP over previous state-of-the-art results on the Pascal3D+ dataset [29]. In particular, for its most challenging 24-view classification task, we improve the results from 31.1% to 36.1% mAVP.
UR - https://www.scopus.com/pages/publications/85047762877
U2 - 10.5244/C.30.91
DO - 10.5244/C.30.91
M3 - Conference contribution
AN - SCOPUS:85047762877
SN - 1901725596
T3 - British Machine Vision Conference 2016, BMVC 2016
SP - 91.1-91.12
BT - British Machine Vision Conference 2016, BMVC 2016
PB - British Machine Vision Conference, BMVC
T2 - 27th British Machine Vision Conference, BMVC 2016
Y2 - 19 September 2016 through 22 September 2016
ER -