Unsupervised Performance Analysis of 3D Face Alignment with a Statistically Robust Confidence Test

by Mostafa Sadeghi, Xavier Alameda-Pineda and Radu Horaud
Neurocomputing, volume 564, January 2024
[Code & Data]

Left: 3D landmarks predicted with a deep face alignment architecture; Right: The predicted landmarks are rigidly mapped onto a frontal pose (big green dots) and overlaid onto a frontal landmark model, or shape atlas, composed of 3D ellipsoids centered at neutral landmark locations (small red dots). This makes it possible to verify whether the tested landmarks (left) lie within ellipsoidal confidence volumes (right). The elongation of these ellipsoids corresponds to variability due to non-rigid facial deformations.
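
The sketch below (not the authors' code) illustrates one way to implement the per-landmark confidence test suggested by the figure, assuming the frontal shape atlas is available as a mean location and a 3x3 covariance matrix per landmark, with a chi-squared quantile sizing the ellipsoids; the function and variable names are ours.

```python
# Minimal sketch of an ellipsoidal confidence test (assumed atlas format:
# per-landmark mean and 3x3 covariance). A mapped landmark is an inlier if its
# squared Mahalanobis distance to the atlas mean stays below a chi-squared
# quantile with 3 degrees of freedom (3D points).
import numpy as np
from scipy.stats import chi2

def landmark_inliers(mapped, atlas_means, atlas_covs, confidence=0.95):
    """mapped, atlas_means: (N, 3); atlas_covs: (N, 3, 3). Returns a boolean mask."""
    threshold = chi2.ppf(confidence, df=3)      # ellipsoid size for the chosen confidence
    inliers = np.empty(len(mapped), dtype=bool)
    for i, (x, mu, cov) in enumerate(zip(mapped, atlas_means, atlas_covs)):
        d = x - mu
        m2 = d @ np.linalg.solve(cov, d)        # squared Mahalanobis distance
        inliers[i] = m2 <= threshold
    return inliers
```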

Abstract: We address the problem of analyzing the performance of 3D face alignment (3DFA), or facial landmark localization. Performance analysis is usually based on annotated datasets. Nevertheless, in the particular case of 3DFA, the annotation process is rarely error-free, which strongly biases the analysis. Instead, we investigate unsupervised performance analysis (UPA). The core ingredient of the proposed methodology is the robust estimation of the rigid transformation between predicted landmarks and model landmarks. We show that the rigid mapping thus computed is affected neither by non-rigid facial deformations, due to variability in expression and identity, nor by landmark localization errors, due to various perturbations. The guiding idea is to apply the estimated rotation, translation and scale in order to map the facial shape embedded in the predicted landmarks (including possible errors) onto a canonical frame. UPA proceeds as follows: (i) 3D landmarks are extracted from a 2D face using the 3DFA method under investigation; (ii) these landmarks are rigidly mapped onto a canonical (frontal) pose, and (iii) a statistically robust confidence score is computed for each landmark. This makes it possible to assess whether mapped landmarks lie inside (inliers) or outside (outliers) a confidence volume. We describe in detail an experimental evaluation protocol that uses publicly available datasets and several 3DFA software packages associated with published articles. The results show that the proposed analysis is consistent with supervised metrics and that it can be used to measure the accuracy of both predicted landmarks and of automatically annotated 3DFA datasets, and to detect and eliminate errors.
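
As a concrete illustration of step (ii), the sketch below estimates the rotation, translation and scale with a closed-form similarity fit (Umeyama) wrapped in a simple trimming loop. This is only a hedged stand-in for the paper's statistically robust estimator, not a reproduction of it, and the function names are hypothetical.

```python
# Minimal sketch of step (ii): map predicted landmarks onto a frontal model with
# a similarity transform, re-fitting on the best-aligned landmarks to reduce the
# influence of localization errors. Not the authors' estimator.
import numpy as np

def umeyama(src, dst):
    """Least-squares scale s, rotation R, translation t such that dst ~ s*R@src + t."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                              # keep a proper rotation (det R = +1)
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / src_c.var(0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

def robust_map_to_frontal(pred, model, iters=5, keep=0.8):
    """Map predicted landmarks onto the frontal model, re-fitting the similarity
    on the best-aligned fraction `keep` of landmarks at each iteration."""
    idx = np.arange(len(pred))
    for _ in range(iters):
        s, R, t = umeyama(pred[idx], model[idx])
        mapped = s * pred @ R.T + t
        err = np.linalg.norm(mapped - model, axis=1)
        idx = np.argsort(err)[: int(keep * len(pred))]   # trim likely outliers
    return mapped, (s, R, t)
```

The mapped landmarks returned by this sketch can then be passed to the `landmark_inliers` test shown earlier to label each landmark as an inlier or an outlier.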
