
FIFO-Diffusion: Generating Infinite Videos from Text without Training (NeurIPS 2024)

💾 VRAM < 10GB             🚀 Infinitely Long Videos            ⭐️ Tuning-free
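In a nutshell, FIFO-Diffusion keeps a first-in-first-out queue of latent frames at increasing noise levels and denoises them diagonally: at every step the head frame finishes denoising and is dequeued as output, while fresh noise is enqueued at the tail, so generation can continue indefinitely. The snippet below is only a conceptual sketch under that description, not the code in this repository; all names are placeholders.

# Conceptual sketch only (placeholder names), not the implementation in this repo.
# A FIFO queue holds `queue_len` latent frames at increasing noise levels; every
# step denoises all of them by one level, pops the now-clean head frame, and
# pushes a freshly sampled fully-noised latent at the tail.
import collections
import torch

def fifo_generate(denoise_step, sample_noise, queue_len, num_frames):
    # denoise_step(latents, levels) -> latents, each one noise level cleaner (placeholder)
    # sample_noise() -> one fully-noised latent frame (placeholder)
    queue = collections.deque(sample_noise() for _ in range(queue_len))
    levels = torch.arange(queue_len)  # noise level per slot; head (index 0) is cleanest
    frames = []
    for _ in range(num_frames):
        latents = torch.stack(list(queue))
        latents = denoise_step(latents, levels)   # diagonal denoising over the queue
        frames.append(latents[0])                 # head frame is fully denoised
        queue = collections.deque(latents[1:])    # dequeue the head, shift the queue forward
        queue.append(sample_noise())              # enqueue new noise at the tail
    return torch.stack(frames)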

     

πŸ“½οΈ See more video samples in our project page!

"An astronaut floating in space, high quality, 4K resolution.",

VideoCrafter2, 100 frames, 320x512 resolution

"A corgi vlogging itself in tropical Maui."

Open-Sora Plan, 512x512 resolution

News 📰

[2024.09.26] 👏👏👏 Our paper has been accepted to NeurIPS 2024!!!

[2024.06.06] 🔥🔥🔥 We are excited to release the code based on Open-Sora Plan v1.1.0. Thanks to the authors for open-sourcing the awesome baseline!

[2024.05.25] 🥳🥳🥳 We are thrilled to present the official PyTorch implementation of FIFO-Diffusion. The initial release is based on VideoCrafter2.

[2024.05.19] 🚀🚀🚀 Our paper, FIFO-Diffusion: Generating Infinite Videos from Text without Training, is now available on arXiv.

Clone our repository

git clone git@github.com:jjihwan/FIFO-Diffusion_public.git
cd FIFO-Diffusion_public

β˜€οΈ Start with VideoCrafter

1. Environment Setup ⚙️ (python==3.10.14 recommended)

python3 -m venv .fifo
source .fifo/bin/activate

pip install -r requirements.txt

2.1 Download the model from Hugging Face 🤗

Model | Resolution | Checkpoint
VideoCrafter2 (Text2Video) | 320x512 | Hugging Face
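If you prefer to script the download, here is a minimal sketch using huggingface_hub; the repo id VideoCrafter/VideoCrafter2 is our assumption, so point it at whichever Hugging Face repository actually hosts the checkpoint.

# Hypothetical download helper (pip install huggingface_hub).
# The repo id below is an assumption, not confirmed by this README.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="VideoCrafter/VideoCrafter2",         # assumed Hugging Face repo id
    filename="model.ckpt",                        # VideoCrafter2 checkpoint file
    local_dir="videocrafter_models/base_512_v2",  # matches the layout in 2.2 below
)
print(ckpt_path)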

2.2 Set up the file structure

Store the checkpoint in the following structure:

cd FIFO-Diffusion_public
    .
    └── videocrafter_models
        └── base_512_v2
            └── model.ckpt      # VideoCrafter2 checkpoint

3.1. Run with VideoCrafter2 (Single GPU)

Requires less than 9GB of VRAM on a Titan Xp.

python3 videocrafter_main.py --save_frames

3.2. Distributed Parallel inference with VideoCrafter2 (Multiple GPUs)

May consume slightly more memory than single-GPU inference (about 11GB on a Titan Xp). Please note that our implementation of parallel inference might not be optimal. Pull requests are welcome! 🤓

python3 videocrafter_main_mp.py --num_gpus 8 --save_frames

3.3. Multi-prompt generation

Coming soon.

β˜€οΈ Start with Open-Sora Plan v1.1.0

For simplicity, our implementation uses the DDPM scheduler with Open-Sora Plan v1.1.0. Since Open-Sora Plan recommends the PNDM scheduler, the results may not reflect optimal performance. Multi-GPU (parallel) inference and PNDM-scheduler support are planned next.
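For context, swapping schedulers is typically a one-line change in a diffusers-style pipeline. The sketch below only illustrates the DDPM-to-PNDM swap in diffusers terms; it is not the code path used by this repository or Open-Sora Plan.

# Illustrative only: the DDPM-vs-PNDM scheduler swap expressed with diffusers.
# Not how this repository or Open-Sora Plan wires its pipeline.
from diffusers import DDPMScheduler, PNDMScheduler

ddpm = DDPMScheduler(num_train_timesteps=1000)   # scheduler we currently use
pndm = PNDMScheduler.from_config(ddpm.config)    # scheduler Open-Sora Plan recommends
print(type(pndm).__name__, pndm.config.num_train_timesteps)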

1. Environment Setup ⚙️ (python==3.10.14 recommended)

cd FIFO-Diffusion_public
git clone git@github.com:PKU-YuanGroup/Open-Sora-Plan.git

python -m venv .sora
source .sora/bin/activate

cd Open-Sora-Plan
pip install -e .

pip install deepspeed

2. Run with Open-Sora Plan v1.1.0, 65x512x512 model

Requires about 40GB of VRAM on an A6000. It uses n=8 by default.

sh scripts/opensora_fifo_65.sh

3. Run with Open-Sora Plan v1.1.0, 221x512x512 model

Requires about 40GB of VRAM on an A6000. It uses n=4 by default.

sh scripts/opensora_fifo_221.sh

4. Distributed Parallel inference with Open-Sora Plan (WIP)

Coming soon.


😆 Citation

@inproceedings{kim2024fifo,
	title = {FIFO-Diffusion: Generating Infinite Videos from Text without Training},
	author = {Jihwan Kim and Junoh Kang and Jinyoung Choi and Bohyung Han},
	booktitle = {NeurIPS},
	year = {2024},
}

🤓 Acknowledgements

Our codebase builds on VideoCrafter, Open-Sora Plan, and zeroscope. Thanks to the authors for sharing their awesome codebases!
