GANFIT: Generative adversarial network fitting for high fidelity 3D face reconstruction

Gecer, Baris, Ploumpis, Stylianos, Kotsia, Irene (ORCID: https://orcid.org/0000-0002-3716-010X) and Zafeiriou, Stefanos (2019) GANFIT: Generative adversarial network fitting for high fidelity 3D face reconstruction. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 16-20 Jun 2019, Long Beach, California. e-ISBN 9781728132938, pbk-ISBN 9781728132945. ISSN 1063-6919 [Conference or Workshop Item] (doi:10.1109/CVPR.2019.00125)

Full text: PDF, final accepted version (with author's formatting), 10 MB.

Abstract

In the past few years, a lot of work has been done towards reconstructing the 3D facial structure from single images by capitalizing on the power of Deep Convolutional Neural Networks (DCNNs). In the most recent works, differentiable renderers were employed in order to learn the relationship between the facial identity features and the parameters of a 3D morphable model for shape and texture. The texture features either correspond to components of a linear texture space or are learned by auto-encoders directly from in-the-wild images. In all cases, state-of-the-art methods are still not capable of reconstructing facial texture in high fidelity. In this paper, we take a radically different approach and harness the power of Generative Adversarial Networks (GANs) and DCNNs in order to reconstruct the facial texture and shape from single images. That is, we utilize GANs to train a very powerful generator of facial texture in UV space. Then, we revisit the original 3D Morphable Models (3DMMs) fitting approaches, making use of non-linear optimization to find the optimal latent parameters that best reconstruct the test image, but under a new perspective. We optimize the parameters with the supervision of pretrained deep identity features through our end-to-end differentiable framework. We demonstrate excellent results in photorealistic and identity-preserving 3D face reconstructions and achieve, for the first time to the best of our knowledge, facial texture reconstruction with high-frequency details.
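The fitting procedure the abstract describes, optimizing a GAN texture latent together with 3DMM shape and camera parameters under an identity loss back-propagated through a differentiable renderer, can be sketched as below. This is not the authors' code: `texture_gan`, `shape_3dmm`, `render`, and `id_net` are hypothetical stand-ins for the pretrained UV texture generator, the morphable shape model, a differentiable renderer, and a pretrained face-recognition network, and the latent dimensions and loss weights are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fit_single_image(target_img, texture_gan, shape_3dmm, render, id_net,
                     steps=200, lr=0.05):
    """Sketch of a GANFIT-style fitting loop for one input image (assumptions noted above)."""
    # Latent texture code, 3DMM shape coefficients, and camera/pose parameters to optimize.
    z_tex = torch.zeros(1, 512, requires_grad=True)    # illustrative GAN latent size
    p_shape = torch.zeros(1, 158, requires_grad=True)  # illustrative number of shape components
    p_cam = torch.zeros(1, 6, requires_grad=True)      # illustrative camera parameterization
    opt = torch.optim.Adam([z_tex, p_shape, p_cam], lr=lr)

    with torch.no_grad():
        # Identity embedding of the target image from the pretrained recognition network.
        target_id = F.normalize(id_net(target_img), dim=-1)

    for _ in range(steps):
        uv_texture = texture_gan(z_tex)                  # GAN-generated UV texture map
        vertices = shape_3dmm(p_shape)                   # mesh vertices from shape coefficients
        rendered = render(vertices, uv_texture, p_cam)   # differentiable rendering of the face

        rendered_id = F.normalize(id_net(rendered), dim=-1)
        id_loss = 1.0 - (rendered_id * target_id).sum()  # cosine distance between identity features
        pix_loss = F.l1_loss(rendered, target_img)       # photometric term
        reg = 1e-3 * (z_tex.pow(2).mean() + p_shape.pow(2).mean())  # keep parameters plausible

        loss = id_loss + 0.1 * pix_loss + reg            # illustrative weighting
        opt.zero_grad()
        loss.backward()
        opt.step()

    return z_tex.detach(), p_shape.detach(), p_cam.detach()
```

The key design point conveyed by the abstract is that the supervision comes from identity features rather than pixel values alone, so the optimized texture and shape are pushed to preserve the person's identity while the GAN prior keeps the UV texture photorealistic.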

Item Type: Conference or Workshop Item (Paper)
Research Areas: A. > School of Science and Technology > Computer Science
Item ID: 26523
Notes on copyright: © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Depositing User: Irene Kotsia
Date Deposited: 02 May 2019 08:19
Last Modified: 29 Nov 2022 19:02
URI: https://eprints.mdx.ac.uk/id/eprint/26523
