DiffDub: Person-Generic Visual Dubbing Using Inpainting Renderer with Diffusion Auto-Encoder


Tao Liu1, Chenpeng Du1, Shuai Fan2, Feilong Chen2, Kai Yu1

1 MoE Key Lab of Artificial Intelligence, AI Institute, X-LANCE Lab, Shanghai Jiao Tong University
2 AISpeech Ltd., Suzhou, China

Abstract


Generating high-quality, person-generic visual dubbing remains a challenge. Recent work has adopted a two-stage paradigm that decouples rendering from lip synchronization through an intermediate representation. However, prior methods either rely on coarse landmarks or are confined to a single speaker, which limits their performance. In this paper, we propose DiffDub: diffusion-based dubbing. We first craft a diffusion auto-encoder with an inpainting renderer that uses a mask to delineate editable and unaltered regions, allowing the lower-face region to be filled in seamlessly while the remaining parts are preserved. Our experiments surfaced several challenges: the semantic encoder lacked robustness, restricting its ability to capture high-level features, and the modeling ignored facial positioning, causing mouth and nose jitter across frames. To tackle these issues, we employ versatile strategies, including data augmentation and supplementary eye guidance. Moreover, we introduce a conformer-based reference encoder and a motion generator fortified by a cross-attention mechanism, which enables our model to learn person-specific textures from varying references and reduces its reliance on paired audio-visual data. Rigorous experiments show that our approach outperforms existing methods by considerable margins and delivers seamless, intelligible videos in person-generic and multilingual scenarios.
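To make the masked-inpainting idea concrete, below is a minimal PyTorch sketch of one denoising step in which only the mask-designated lower-face region is re-synthesized while all other pixels are copied from the source frame. The function name, the DDIM-style cumulative noise schedule, and the z_sem semantic-code argument are illustrative assumptions, not the authors' implementation.

import torch

def inpaint_step(x_t, mask, x_known, denoiser, z_sem, t, alpha_bar):
    """One denoising step that edits only the masked (lower-face) region.

    x_t      : (B, C, H, W) current noisy frame
    mask     : (B, 1, H, W) 1 = editable lower face, 0 = keep as-is
    x_known  : (B, C, H, W) clean pixels for the frozen region
    z_sem    : semantic code from the diffusion auto-encoder's encoder
    t        : integer timestep; alpha_bar: cumulative noise schedule
    """
    eps = denoiser(x_t, t, z_sem)                      # predicted noise
    a = alpha_bar[t]
    x0_hat = (x_t - (1 - a).sqrt() * eps) / a.sqrt()   # estimate of x_0
    # Composite: re-synthesize inside the mask, preserve everything else.
    return mask * x0_hat + (1 - mask) * x_known

if __name__ == "__main__":
    B, C, H, W = 1, 3, 64, 64
    x_t = torch.randn(B, C, H, W)
    mask = torch.zeros(B, 1, H, W)
    mask[..., H // 2:, :] = 1.0                        # lower half editable
    x_known = torch.rand(B, C, H, W)
    alpha_bar = torch.linspace(0.999, 0.01, 1000)
    dummy = lambda x, t, z: torch.randn_like(x)        # stand-in denoiser
    print(inpaint_step(x_t, mask, x_known, dummy, None, 500, alpha_bar).shape)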
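Similarly, the fusion between the motion generator and the conformer-based reference encoder can be sketched as a standard cross-attention layer in which audio-driven motion features query a variable-length set of reference embeddings. The module and tensor names (RefCrossAttention, motion_feats, ref_feats) and all dimensions are hypothetical; the sketch only illustrates why an arbitrary number of reference frames can be consumed.

import torch
import torch.nn as nn

class RefCrossAttention(nn.Module):
    """Motion features (queries) attend over reference embeddings (keys/values)."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, motion_feats, ref_feats):
        # motion_feats: (B, T, dim) audio-driven motion sequence
        # ref_feats:    (B, N, dim) embeddings of N reference frames; N may
        # vary, so person-specific texture cues can come from any number of
        # references rather than a fixed-size conditioning input.
        fused, _ = self.attn(motion_feats, ref_feats, ref_feats)
        return self.norm(motion_feats + fused)         # residual + layer norm

if __name__ == "__main__":
    layer = RefCrossAttention()
    out = layer(torch.randn(2, 50, 256), torch.randn(2, 8, 256))
    print(out.shape)  # torch.Size([2, 50, 256])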



Method Comparison on HDTF Reconstruction


Comparison videos (Videos 1-5, plus more): Ground-truth, Wav2Lip, PC-AVS, LP-LAP, DAE-Talker, Ours.

Method Comparison on One-shot Dubbing


Comparison videos (Videos 1-5, plus more): Portrait, Wav2Lip, PC-AVS, LP-LAP, DAE-Talker, Ours.

Method Comparison on Few-shot Dubbing


Comparison videos (Videos 1-5, plus more): Wav2Lip, PC-AVS, LP-LAP, DAE-Talker, Ours.

Ablation Study


Ablation videos (Videos 1-5, plus more): Ground-truth, w/o eye guidance, w/o image aug., w/o w.s., 10% paired data, Ours.

Statement: The videos on this page are sourced from HDTF and DINet and are intended solely for academic demonstration; no offensive intent is implied.


Citation


@inproceedings{liu2024diffdub,
  title={DiffDub: Person-Generic Visual Dubbing Using Inpainting Renderer with Diffusion Auto-Encoder},
  author={Liu, Tao and Du, Chenpeng and Fan, Shuai and Chen, Feilong and Yu, Kai},
  booktitle={ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={3630--3634},
  year={2024},
  organization={IEEE}
}