Face Disentanglement via Latent Space Mapping
Event Type: Technical Papers, Technical Papers Q&A
Time: Saturday, 12 December 2020, 10:42 - 10:48 SGT
Location: Zoom Room 5
Description: Learning disentangled representations of data is a fundamental problem in
artificial intelligence. Specifically, disentangled latent representations allow
generative models to control and compose the disentangled factors in the
synthesis process. Current methods, however, require extensive supervision
and training, or else noticeably compromise quality. In this paper, we
present a method that learns to represent data in a disentangled way,
with minimal supervision, relying solely on available pre-trained
networks. Our key insight is to decouple the processes of disentanglement
and synthesis by employing a leading pre-trained unconditional image
generator, such as StyleGAN. By learning to map into its latent space, we
leverage both its state-of-the-art generative quality and its rich,
expressive latent space, without the burden of training it.
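The decoupling described above can be illustrated with a minimal linear-algebra toy. This is not the paper's architecture: the encoders, the mapper, the generator, and all dimensions below are hypothetical stand-ins, with frozen random matrices playing the role of pre-trained networks and only the mapper being the component one would actually train.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- stand-ins, not values from the paper.
IMG_DIM, ID_DIM, ATTR_DIM, W_DIM = 64, 16, 16, 32

# Frozen stand-ins for pre-trained networks (random linear maps here).
W_id = rng.standard_normal((ID_DIM, IMG_DIM))            # identity encoder
W_attr = rng.standard_normal((ATTR_DIM, IMG_DIM))        # attribute encoder
W_gen = rng.standard_normal((IMG_DIM, W_DIM))            # frozen generator G

# The only "trained" component: maps disentangled codes into G's latent space.
W_map = rng.standard_normal((W_DIM, ID_DIM + ATTR_DIM))

def encode(img):
    """Split an image into an identity code and an attribute code."""
    return W_id @ img, W_attr @ img

def synthesize(z_id, z_attr):
    """Map the two codes into the generator's latent space and decode;
    disentanglement and synthesis are decoupled, so G is never retrained."""
    w = W_map @ np.concatenate([z_id, z_attr])
    return W_gen @ w

img_a = rng.standard_normal(IMG_DIM)
img_b = rng.standard_normal(IMG_DIM)

id_a, attr_a = encode(img_a)
id_b, attr_b = encode(img_b)

# Compose factors across images: B's identity with A's other attributes.
swapped = synthesize(id_b, attr_a)
```

Because the codes are composed only at the mapper's input, swapping one factor while holding the other fixed is a single concatenation; the frozen generator supplies all synthesis quality.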

We demonstrate our approach on the complex and high-dimensional
domain of human heads. We evaluate our method qualitatively and quantitatively,
and demonstrate its success on de-identification and on
temporal identity coherence in image sequences. Through extensive experimentation,
we show that our method successfully disentangles identity
from other facial attributes, surpassing existing methods even though they
require more training and supervision.
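The de-identification and temporal-coherence applications follow the same pattern: hold one surrogate identity code fixed for an entire clip while re-encoding the attributes of each frame. The sketch below is a hypothetical toy with random linear maps standing in for the pre-trained encoders and generator; none of the names or shapes come from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical shapes; frozen random maps stand in for pre-trained nets.
IMG_DIM, ID_DIM, ATTR_DIM = 64, 16, 16
W_attr = rng.standard_normal((ATTR_DIM, IMG_DIM))           # attribute encoder
W_gen = rng.standard_normal((IMG_DIM, ID_DIM + ATTR_DIM))   # generator stand-in

frames = [rng.standard_normal(IMG_DIM) for _ in range(5)]

# De-identification: one surrogate identity code for the whole clip keeps
# the replacement identity temporally coherent across frames.
surrogate_id = rng.standard_normal(ID_DIM)

deidentified = [
    W_gen @ np.concatenate([surrogate_id, W_attr @ f])  # per-frame attributes
    for f in frames
]
```

Per-frame attributes (pose, expression, lighting) vary freely, while the shared identity code is what yields a consistent identity over the sequence.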