Export3D: Learning to Generate Conditional Tri-plane for 3D-aware Expression Controllable Portrait Animation ECCV 2024
- Taekyung Ki DeepBrain AI Inc.
- Dongchan Min Graduate School of AI, KAIST
- Gyeongsu Chae DeepBrain AI Inc.
Abstract
In this paper, we present Export3D, a one-shot 3D-aware portrait animation method that can control the facial expression and camera view of a given portrait image. To achieve this, we introduce a tri-plane generator that directly generates a tri-plane of 3D prior by transferring the 3DMM expression parameter into the source image. The tri-plane is then decoded into images of different views through differentiable volume rendering. Existing portrait animation methods rely heavily on image warping to transfer expressions in the motion space, which makes it challenging to disentangle appearance from expression. In contrast, we propose a contrastive pre-training framework for an appearance-free expression parameter, eliminating undesirable appearance swaps when transferring a cross-identity expression. Extensive experiments show that our pre-training framework can learn the appearance-free expression representation hidden in 3DMM, and that our model can generate 3D-aware, expression-controllable portrait images without appearance swaps in a cross-identity manner.
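The tri-plane decoding step above follows the EG3D-style convention of querying three axis-aligned feature planes. Below is a minimal illustrative sketch of such a lookup (not the authors' code; the array shapes, axis-to-plane assignment, and summation of plane features are assumptions based on the standard tri-plane formulation):

```python
import numpy as np

def sample_triplane(planes, points):
    """Query a tri-plane of features at 3D points (illustrative sketch).
    planes: (3, C, H, W) feature planes for the xy, xz, and yz planes.
    points: (N, 3) coordinates in [-1, 1]^3.
    Returns (N, C): the sum of bilinear samples from the three planes.
    """
    _, C, H, W = planes.shape
    # 2D projections of each 3D point onto the three axis-aligned planes
    projections = [points[:, [0, 1]], points[:, [0, 2]], points[:, [1, 2]]]
    feats = np.zeros((len(points), C))
    for plane, uv in zip(planes, projections):
        # map [-1, 1] to pixel coordinates, then bilinearly interpolate
        x = (uv[:, 0] + 1) * 0.5 * (W - 1)
        y = (uv[:, 1] + 1) * 0.5 * (H - 1)
        x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
        x1, y1 = np.clip(x0 + 1, 0, W - 1), np.clip(y0 + 1, 0, H - 1)
        wx, wy = x - x0, y - y0
        feats += ((1 - wx) * (1 - wy))[:, None] * plane[:, y0, x0].T \
               + (wx * (1 - wy))[:, None] * plane[:, y0, x1].T \
               + ((1 - wx) * wy)[:, None] * plane[:, y1, x0].T \
               + (wx * wy)[:, None] * plane[:, y1, x1].T
    return feats
```

In a full pipeline, these sampled features would be fed to a small MLP predicting density and color for volume rendering along camera rays.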
One-shot Novel-view Animation Results
Our method can generate 3D-aware videos using a single source image. Here, we fix the expression and animate the source (the first row) by controlling the camera parameters (the second row).
Video 1. One-shot free-view animation results on VoxCeleb1 test data.
Video 2. One-shot free-view animation results on VFHQ test data.
One-shot Reconstruction Results
Our method can faithfully reconstruct facial expressions (e.g., eye-blink, mouth motion) and head pose using a single reference image.
Here, the source images are the same as above. Note that for intuitive comparison, we incorporate the corresponding audio into the output videos.
First column: Ground Truth. Second column: Generated Result. Third column: Estimated Depth.
Videos 3-6. One-shot Reconstruction on VoxCeleb1 test data.
Videos 7-10. One-shot Reconstruction on VFHQ test data.
One-shot Cross Identity Transfer Results
We provide cross expression+camera results with different identities in VFHQ. Our method preserves the visual identity using only a single reference image (the first row) while transferring the expression of the driving video (the first column).
Video 11. One-shot Cross Expression+Camera on VFHQ test data (1).
Video 12. One-shot Cross Expression+Camera on VFHQ test data (2).
Comparison with other Methods
Same-identity Transfer Results
Video 13a. Same-identity Comparison on VFHQ.
Video 13b. Same-identity Comparison on TalkingHead-1KH.
Video 13c. Same-identity Comparison on VFHQ.
Video 13d. Same-identity Comparison on TalkingHead-1KH.
Cross-identity Transfer Results
Video 14a. Cross-identity Comparison on VFHQ.
Video 14b. Cross-identity Comparison on TalkingHead-1KH.
Video 14c. Cross-identity Comparison on VFHQ.
Video 14d. Cross-identity Comparison on TalkingHead-1KH.
About Expression Controls
To explicitly control the facial expression of a given source image, we leverage 3DMM expression parameters. However, we observe that raw 3DMM expression parameters still contain appearance information, even in a low-dimensional space (visualized via t-SNE), resulting in undesirable appearance swaps (e.g., eye shape, head size) when transferring cross-identity expressions. To alleviate this issue, we propose a contrastive pre-training framework to extract the expression information hidden in the expression parameters. Further details about the framework are in our paper.
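As a rough intuition for contrastive pre-training, the sketch below shows a generic symmetric InfoNCE-style objective over paired expression embeddings; this is an assumption-laden illustration of the general technique, not the authors' exact loss or positive/negative sampling scheme (for which see the paper):

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Generic InfoNCE-style contrastive loss (illustrative sketch).
    anchors, positives: (N, D) embeddings; row i of `positives` is the
    matching pair for row i of `anchors` (e.g., the same expression
    observed under a different identity). All other rows act as negatives.
    """
    # cosine similarities between every anchor/positive pair
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature
    # cross-entropy over rows: -log softmax at the matching column i
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_probs).mean()
```

Minimizing such a loss pulls embeddings of the same expression together across identities while pushing different expressions apart, which is the mechanism that strips identity-dependent (appearance) information from the representation.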
Video 15. Ablation Studies on Expression Encoding.
Expression Control along the Orthogonal Directions
At the same time, we enforce an orthogonal structure on the learned expression embeddings, so that different expressions are represented along different basis vectors.
Video 16. Visualization of Expression Control along Orthogonal Directions.
Novel-view Synthesis Results with Expression Transfer
Our method can transfer the driving expression to the source image while maintaining multi-view consistency in lighting and texture.
Video 17. Results of Novel-view Synthesis with Expression Transfer.
Citation
Related Links
EG3D provides an excellent code base for our work.
The website template was borrowed from Michaël Gharbi and Mip-NeRF.