-
Prerequisites
Describe the problem
Hey! Thank you for your work and this fork! Do I maybe need to do some extra step after installing your fork to enable this feature?
Full console log output
D:\Fooocus_mashb1t_win64_2-1-864>.\python_embeded\python.exe -s Fooocus\entry_with_update.py --listen --always-normal-vram --theme dark --preview-option fast
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\\entry_with_update.py', '--listen', '--always-normal-vram', '--theme', 'dark', '--preview-option', 'fast']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.864
Total VRAM 8192 MB, total RAM 14188 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce RTX 3070 Laptop GPU : native
VAE dtype: torch.bfloat16
Using pytorch cross attention
Refiner unloaded.
Running on local URL: http://0.0.0.0:7865
model_type EPS
UNet ADM Dimension 2816
To create a public link, set `share=True` in `launch()`.
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
Base model loaded: D:\Fooocus_mashb1t_win64_2-1-864\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [D:\Fooocus_mashb1t_win64_2-1-864\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [D:\Fooocus_mashb1t_win64_2-1-864\Fooocus\models\loras\sd_xl_offset_example-lora_1.0.safetensors] for UNet [D:\Fooocus_mashb1t_win64_2-1-864\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.85 seconds
Started worker with PID 38792
App started successful. Use the app with http://localhost:7865/ or 0.0.0.0:7865
Version
Fooocus 2.1.864
Where are you running Fooocus?
Locally
Operating System
Windows 11
What browsers are you seeing the problem on?
Chrome
Replies: 2 comments
-
Thanks for the report. This is indeed correct: the branch https://github.com/mashb1t/Fooocus/tree/feature/add-lcm-realtime-canvas has never been merged into develop/main because it did not reach its performance goals, i.e. it is slow (3 s per image) compared to alternatives that process directly with no queue between the frontend and GenAI (0.2-0.4 s). The branch is therefore stale, but fully functional.
Feel free to merge the branch into your own fork and try it, but you might be better off using StreamDiffusion.
Maybe deleting the feature would be better overall... What do you think?
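For anyone who wants to try that, here is a minimal sketch of what merging the stale branch into a fork could look like. It assumes you are inside a local clone of your own fork; the remote name mashb1t and the branch name try-lcm-canvas are just illustrative placeholders, not anything prescribed by the project:

# add the upstream fork as a remote and fetch the stale feature branch
git remote add mashb1t https://github.com/mashb1t/Fooocus.git
git fetch mashb1t feature/add-lcm-realtime-canvas
# create a local working branch from your fork's main and merge the feature into it
git checkout -b try-lcm-canvas main
git merge mashb1t/feature/add-lcm-realtime-canvas
# resolve any conflicts, then launch Fooocus as usual from the merged checkout

Since the branch is stale, the merge may well produce conflicts against a newer main; resolving those is on you.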
-
@apximax converted to discussion |