## Updates 🔥🔥🔥

We have released the Gradio demo for Hybrid (Trajectory + Landmark) Controls HERE!

## Environment Setup

```
pip install -r requirements.txt
```

## Download checkpoints

1. Download the pretrained SVD_xt checkpoints from huggingface to `./ckpts`.
2. Download the MOFA-Adapter checkpoint from huggingface to `./ckpts`.
3. Download the CMP checkpoint from here and put it into `./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints`.

The final structure of the checkpoints should be:

```
./ckpts/
|-- controlnet
|   |-- config.json
|   `-- diffusion_pytorch_model.safetensors
`-- stable-video-diffusion-img2vid-xt-1-1
    |-- feature_extractor
    |   `-- ...
    |-- image_encoder
    |   `-- ...
    |-- scheduler
    |   `-- ...
    |-- unet
    |   `-- ...
    |-- vae
    |   `-- ...
    |-- svd_xt_1_1.safetensors
    `-- model_index.json
```
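Misplaced or partially downloaded checkpoints are a common source of startup errors, so it can help to sanity-check the layout above before launching the demo. The sketch below is ours, not part of the repo (the `check_ckpts` helper and the choice of which entries to verify are assumptions based on the tree shown):

```python
import os

# Files and directories expected under the checkpoint root,
# taken from the tree above (the "..." subfolders are checked
# only as directories, since their contents are elided).
EXPECTED_FILES = [
    "controlnet/config.json",
    "controlnet/diffusion_pytorch_model.safetensors",
    "stable-video-diffusion-img2vid-xt-1-1/svd_xt_1_1.safetensors",
    "stable-video-diffusion-img2vid-xt-1-1/model_index.json",
]
EXPECTED_DIRS = [
    "stable-video-diffusion-img2vid-xt-1-1/" + d
    for d in ("feature_extractor", "image_encoder", "scheduler", "unet", "vae")
]

def check_ckpts(root="./ckpts"):
    """Return a list of expected files/directories missing under `root`."""
    missing = [p for p in EXPECTED_FILES
               if not os.path.isfile(os.path.join(root, p))]
    missing += [d for d in EXPECTED_DIRS
                if not os.path.isdir(os.path.join(root, d))]
    return missing

if __name__ == "__main__":
    gone = check_ckpts()
    print("all checkpoints in place" if not gone
          else "missing: " + ", ".join(gone))
```

Running it from the repo root prints any missing entries, which is quicker to act on than a mid-inference file-not-found traceback.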

## Run Gradio Demo

```
python run_gradio.py
```

🪄🪄🪄 The Gradio interface is shown below. Please follow the instructions on the interface during inference!