Robust Face Frontalization

Face Frontalization Based on Robustly Fitting a Deformable Shape Model to 3D Landmarks
Zhiqi Kang, Mostafa Sadeghi, and Radu Horaud
Submitted to IEEE Transactions on Multimedia (arXiv, BibTeX)

Robust face frontalization (RFF) pipeline. 3D landmarks are extracted from both an arbitrarily-viewed input face and a frontally-viewed deformable shape model. These landmarks are robustly aligned, which makes it possible to compute the pose of the input face and to frontalize the input landmarks. The deformable shape model is then fitted to these landmarks, yielding a frontalized shape model, from which a dense frontal depth map is computed by interpolating the 3D vertices of the triangulated mesh that describes the shape model. Finally, the input-face pixels are warped onto the output-image pixels.
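The core of this pipeline is the robust alignment of the two 3D landmark sets. The sketch below illustrates that stage with NumPy only; the function names, the fixed degrees-of-freedom value nu, and the isotropic noise model are illustrative simplifications, not the paper's exact generalized-t formulation.

```python
# Minimal sketch of robust 3D landmark alignment: a weighted similarity
# (scale/rotation/translation) estimate alternated with Student's t weights.
# Assumptions: isotropic noise, fixed nu; not the paper's exact EM algorithm.
import numpy as np

def weighted_similarity(X, Y, w):
    """Weighted Umeyama-style alignment of X onto Y.

    X, Y : (N, 3) corresponding 3D landmarks; w : (N,) nonnegative weights.
    Returns (s, R, t) such that s * R @ x + t approximates y.
    """
    w = w / w.sum()
    mx, my = w @ X, w @ Y                        # weighted centroids
    Xc, Yc = X - mx, Y - my
    S = (w[:, None] * Yc).T @ Xc                 # weighted cross-covariance
    U, D, Vt = np.linalg.svd(S)
    d = np.sign(np.linalg.det(U @ Vt))           # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    var_x = (w * (Xc ** 2).sum(axis=1)).sum()
    s = (D * np.array([1.0, 1.0, d])).sum() / var_x
    t = my - s * R @ mx
    return s, R, t

def robust_align(X, Y, nu=3.0, n_iter=20):
    """Alternate between pose estimation and Student's t responsibilities."""
    N, d = X.shape
    w = np.ones(N)                               # start with uniform weights
    for _ in range(n_iter):
        s, R, t = weighted_similarity(X, Y, w)   # M-step: rigid parameters
        r2 = ((Y - (s * X @ R.T + t)) ** 2).sum(axis=1)  # squared residuals
        sigma2 = (w * r2).sum() / (d * w.sum())  # M-step: noise scale
        w = (nu + d) / (nu + r2 / sigma2)        # E-step: downweight outliers
    return s, R, t, w
```

Landmarks with large residuals receive small t-weights, so a few grossly wrong detections cannot corrupt the pose estimate, which is the behavior the heavy-tailed model is meant to provide.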

Abstract. Face frontalization consists of synthesizing a frontally-viewed face from an arbitrarily-viewed one. The main contribution of this paper is a robust face alignment method that enables pixel-to-pixel warping. The method simultaneously estimates the rigid transformation (scale, rotation, and translation) and the non-rigid deformation between two 3D point sets: a set of 3D landmarks extracted from an arbitrarily-viewed face, and a set of 3D landmarks parameterized by a frontally-viewed deformable face model. An important merit of the proposed method is its ability to deal both with noise (small perturbations) and with outliers (large errors). We propose to model inliers and outliers with the generalized Student's t-probability distribution function, a heavy-tailed distribution that is immune to non-Gaussian errors in the data. We describe in detail the associated expectation-maximization (EM) algorithm that alternates between the estimation of (i) the rigid parameters, (ii) the deformation parameters, and (iii) the t-distribution parameters. We also propose to use the zero-mean normalized cross-correlation score, between a frontalized face and the corresponding ground-truth frontally-viewed face, to evaluate the performance of frontalization. To this end, we use a dataset that contains pairs of profile-viewed and frontally-viewed faces. This evaluation, based on direct image-to-image comparison, stands in contrast with indirect evaluation, based on analyzing the effect of frontalization on face recognition.
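The proposed evaluation score is straightforward to compute. Below is a minimal zero-mean normalized cross-correlation (ZNCC) sketch in NumPy; the zncc helper name and the epsilon guard are illustrative assumptions, and image loading, face-region masking, and alignment are left out.

```python
# Minimal sketch of the ZNCC score between a frontalized face and its
# ground-truth frontal view. Assumes same-size grayscale inputs; the
# epsilon guard against zero variance is an illustrative assumption.
import numpy as np

def zncc(a, b, eps=1e-8):
    """Zero-mean normalized cross-correlation of two images, in [-1, 1]."""
    a = np.asarray(a, dtype=np.float64).ravel()
    b = np.asarray(b, dtype=np.float64).ravel()
    a = a - a.mean()                             # remove mean intensity
    b = b - b.mean()
    return float((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

# Example usage (arrays assumed pre-aligned, same size, grayscale):
# score = zncc(frontalized_gray, ground_truth_gray)
```

Because ZNCC is invariant to affine changes of image intensity, it compares the two faces on structure rather than on global brightness or contrast, which makes it a direct image-to-image measure of frontalization quality.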