TY - GEN
T1 - 3D active shape model for automatic facial landmark location trained with automatically generated landmark points
AU - Zhou, Dianle
AU - Petrovska-Delacrétaz, Dijana
AU - Dorizzi, Bernadette
PY - 2010/11/18
Y1 - 2010/11/18
N2 - In this paper, a 3D Active Shape Model (3DASM) algorithm is presented to automatically locate facial landmarks from different views. The 3DASM is trained by setting different shape and texture parameters of a 3D Morphable Model (3DMM). Using the 3DMM to synthesize training data offers two advantages: first, few manual operations are needed, apart from labeling landmarks on the mean face of the 3DMM. Second, since the learning data come directly from the 3DMM, the landmarks have a one-to-one correspondence between the 2D points detected in the image and the 3D points on the 3DMM. This correspondence benefits subsequent 3D face reconstruction. During fitting, 3D rotation parameters are added compared to the 2D Active Shape Model (ASM), so shape variations are separated into intrinsic changes (caused by the characteristics of different persons) and extrinsic changes (caused by model projection). The experimental results show that our method is robust to pose variation.
AB - In this paper, a 3D Active Shape Model (3DASM) algorithm is presented to automatically locate facial landmarks from different views. The 3DASM is trained by setting different shape and texture parameters of a 3D Morphable Model (3DMM). Using the 3DMM to synthesize training data offers two advantages: first, few manual operations are needed, apart from labeling landmarks on the mean face of the 3DMM. Second, since the learning data come directly from the 3DMM, the landmarks have a one-to-one correspondence between the 2D points detected in the image and the 3D points on the 3DMM. This correspondence benefits subsequent 3D face reconstruction. During fitting, 3D rotation parameters are added compared to the 2D Active Shape Model (ASM), so shape variations are separated into intrinsic changes (caused by the characteristics of different persons) and extrinsic changes (caused by model projection). The experimental results show that our method is robust to pose variation.
U2 - 10.1109/ICPR.2010.926
DO - 10.1109/ICPR.2010.926
M3 - Conference contribution
AN - SCOPUS:78149472490
SN - 9780769541099
T3 - Proceedings - International Conference on Pattern Recognition
SP - 3801
EP - 3805
BT - Proceedings - 2010 20th International Conference on Pattern Recognition, ICPR 2010
T2 - 2010 20th International Conference on Pattern Recognition, ICPR 2010
Y2 - 23 August 2010 through 26 August 2010
ER -