Official implementation of "Sonic: Shifting Focus to Global Audio Perception in Portrait Animation"

jixiaozhong/Sonic


Sonic

Sonic: Shifting Focus to Global Audio Perception in Portrait Animation


πŸ‘‹ Join our QQ Chat Group

πŸ”₯πŸ”₯πŸ”₯ NEWS

2025/02/08: Many thanks to the open-source community contributors for making the ComfyUI version of Sonic a reality. Your efforts are truly appreciated! ComfyUI version of Sonic

2025/02/06: Commercialization: Note that our license is non-commercial. If commercialization is required, please use Tencent Cloud Video Creation Large Model: Introduction / API documentation

2025/01/17: Our online Hugging Face demo is released.

2025/01/17: Thank you to NewGenAI for promoting our Sonic and creating a Windows-based tutorial on YouTube.

2024/12/16: Our online demo is released.

πŸŽ₯ Demo

Demo clips (input/output pairs):
  • anime1.mp4
  • female_diaosu.mp4
  • hair.mp4
  • leonnado.mp4

For more visual demos, please visit our Page.

🧩 Community Contributions

If you develop or use Sonic in your projects, you are welcome to let us know.

πŸ“‘ Updates

2025/01/14: Our inference code and weights are released. Stay tuned, we will continue to polish the model.

πŸ“œ Requirements

  • An NVIDIA GPU with CUDA support is required.
    • The model is tested on a single 32 GB GPU.
  • Tested operating system: Linux
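
Before a long run, you can sanity-check that a GPU with enough memory is visible. The sketch below is not part of the Sonic codebase; it is a minimal example that queries `nvidia-smi` (a real CLI shipped with the NVIDIA driver) and compares the first GPU's total memory against the 32 GB the model was tested on. `parse_gpu_memory_mib` and `check_gpu` are hypothetical helper names.

```python
import subprocess

# Hypothetical helper (not part of the Sonic repo): parse the total memory
# of the first GPU, in MiB, from `nvidia-smi` CSV output such as "32768 MiB".
def parse_gpu_memory_mib(csv_output: str) -> int:
    first_line = csv_output.strip().splitlines()[0]
    return int(first_line.split()[0])

def check_gpu(min_mib: int = 32 * 1024) -> bool:
    """Return True if the first visible GPU has at least `min_mib` of memory."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.total", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_gpu_memory_mib(out) >= min_mib
```

A smaller GPU may still work for short clips, but has not been tested by the authors.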

πŸ”‘ Inference

Installation

  • Install PyTorch, then install the remaining dependencies:
  pip3 install -r requirements.txt
  • All models are stored in checkpoints by default; the file structure is as follows:
Sonic
  β”œβ”€β”€checkpoints
  β”‚  β”œβ”€β”€Sonic
  β”‚  β”‚  β”œβ”€β”€audio2bucket.pth
  β”‚  β”‚  β”œβ”€β”€audio2token.pth
  β”‚  β”‚  β”œβ”€β”€unet.pth
  β”‚  β”œβ”€β”€stable-video-diffusion-img2vid-xt
  β”‚  β”‚  β”œβ”€β”€...
  β”‚  β”œβ”€β”€whisper-tiny
  β”‚  β”‚  β”œβ”€β”€...
  β”‚  β”œβ”€β”€RIFE
  β”‚  β”‚  β”œβ”€β”€flownet.pkl
  β”‚  β”œβ”€β”€yoloface_v5m.pt
  β”œβ”€β”€...

Download the models with huggingface-cli:

  python3 -m pip install "huggingface_hub[cli]"
  huggingface-cli download LeonJoe13/Sonic --local-dir  checkpoints
  huggingface-cli download stabilityai/stable-video-diffusion-img2vid-xt --local-dir  checkpoints/stable-video-diffusion-img2vid-xt
  huggingface-cli download openai/whisper-tiny --local-dir checkpoints/whisper-tiny
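
If you prefer to script the downloads, the same three repositories can be fetched with the `huggingface_hub` Python API (`snapshot_download` is its real entry point). This sketch just mirrors the CLI commands above; network access is required when it actually runs.

```python
# Repo IDs and destination directories, matching the CLI commands above.
MODELS = {
    "LeonJoe13/Sonic": "checkpoints",
    "stabilityai/stable-video-diffusion-img2vid-xt":
        "checkpoints/stable-video-diffusion-img2vid-xt",
    "openai/whisper-tiny": "checkpoints/whisper-tiny",
}

def download_all():
    # Imported lazily so the mapping above can be inspected without the package.
    from huggingface_hub import snapshot_download  # pip install "huggingface_hub[cli]"
    for repo_id, local_dir in MODELS.items():
        snapshot_download(repo_id=repo_id, local_dir=local_dir)
```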

Or manually download the pretrained models, svd-xt, and whisper-tiny to checkpoints/.

Run demo

  python3 demo.py \
  '/path/to/input_image' \
  '/path/to/input_audio' \
  '/path/to/output_video'
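
To animate several image/audio pairs in one go, a thin wrapper can loop over `demo.py` invocations. The command shape mirrors the call above; `build_cmd` and `run_batch` are hypothetical helpers, not part of the repo, and the wrapper assumes it is run from the repository root.

```python
import subprocess
from pathlib import Path

def build_cmd(image, audio, output):
    """Build the demo.py command line shown above for one input pair."""
    return ["python3", "demo.py", str(image), str(audio), str(output)]

def run_batch(pairs, out_dir="results"):
    """Run demo.py once per (image, audio) pair; each output video is
    named after its image's stem and written under `out_dir`."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for image, audio in pairs:
        video = out / (Path(image).stem + ".mp4")
        subprocess.run(build_cmd(image, audio, video), check=True)
```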

πŸ”— Citation

If you find our work helpful for your research, please consider citing it.

@article{ji2024sonic,
  title={Sonic: Shifting Focus to Global Audio Perception in Portrait Animation},
  author={Ji, Xiaozhong and Hu, Xiaobin and Xu, Zhihong and Zhu, Junwei and Lin, Chuming and He, Qingdong and Zhang, Jiangning and Luo, Donghao and Chen, Yi and Lin, Qin and others},
  journal={arXiv preprint arXiv:2411.16331},
  year={2024}
}

@article{ji2024realtalk,
  title={Realtalk: Real-time and realistic audio-driven face generation with 3d facial prior-guided identity alignment network},
  author={Ji, Xiaozhong and Lin, Chuming and Ding, Zhonggan and Tai, Ying and Zhu, Junwei and Hu, Xiaobin and Luo, Donghao and Ge, Yanhao and Wang, Chengjie},
  journal={arXiv preprint arXiv:2406.18284},
  year={2024}
}

πŸ“œ Related Works

Explore our related research:

πŸ“ˆ Star History

Star History Chart
