Add OpenVINO backend for torch.compile node #6638
base: master
Conversation
remove history commit
Questions, as an Intel Arc owner who has contributed to the repository:

1.) I have used both the Triton(
2.) Are there sensible errors that pop up if you don't have a suitable OpenVINO GPU or NPU and try to run this node, and ways to diagnose and solve them if users run into them? This can be an issue on both device types, but especially the NPU, which requires drivers at this time to function properly. I can't even get my LNL laptop to use the NPU on Linux right now, so I also have questions about maturity at this time.
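A minimal diagnostic sketch (not part of this PR, and assuming the standard OpenVINO Python API) that lists the devices the installed runtime can actually see, which helps separate a missing GPU/NPU driver from a problem with the node:

```python
# Diagnostic sketch: enumerate the devices the installed OpenVINO runtime
# detects. A missing 'GPU' or 'NPU' entry usually points at a driver issue
# rather than a bug in this node.
import openvino as ov

core = ov.Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU'] -- no 'NPU' means the NPU driver is not loaded
```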
Hi @simonlui, many thanks for your quick feedback.
Finally. Thanks to comfyanonymous/ComfyUI#6638 for serving as a guide for the add_patches function.
Hi there, just passing by to say many, many thanks! With your add_patches modifications (and others) I finally managed to make LoRAs work with torch.compile. Really appreciated!
Sorry for the double post, but I'm wondering: does loading a LoRA, then disabling it, and then enabling it again work fine for you? Maybe some unpatching or recompiling is needed. I think on the first inference with a LoRA, it will patch the keys before compiling, and it will work. If you then disable and re-enable the LoRA, it will compile without the LoRA and add _orig_mod prefixes to the keys, so when it tries to apply the LoRA keys again on a third inference to the compiled model, the keys won't match and the LoRA won't load. Correct me if I'm wrong though.
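For illustration, a minimal sketch (not the PR's actual code) of the key-prefix behaviour described above: torch.compile wraps the model in an OptimizedModule whose state-dict keys gain a `_orig_mod.` prefix, so patches recorded against the plain key names no longer match. The strip_compile_prefix helper is hypothetical:

```python
# Sketch of the key mismatch: compiling wraps the model and prefixes every
# state-dict key with "_orig_mod.", so LoRA patch keys recorded against the
# plain names stop matching.
import torch
import torch.nn as nn

model = nn.Linear(4, 4)
compiled = torch.compile(model)

print(list(model.state_dict()))     # ['weight', 'bias']
print(list(compiled.state_dict()))  # ['_orig_mod.weight', '_orig_mod.bias']

# Hypothetical helper: normalize keys before matching LoRA patches.
def strip_compile_prefix(key: str, prefix: str = "_orig_mod.") -> str:
    return key[len(prefix):] if key.startswith(prefix) else key
```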
I think it can supp
Hi, when your implementation path starts from a checkpoint without LoRA, everything works. However, if it starts from a checkpoint with LoRA, enabling and disabling the LoRA does not work. Which means:

In the second case, my new patch will not be triggered, so I believe it is a general issue with the torch.compile node, and I will do further investigation.
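One hedged workaround sketch for the case above, assuming the stale state lives in TorchDynamo's compile caches: reset them so that the next inference recompiles with the current (patched or unpatched) weights:

```python
# Sketch: clear TorchDynamo's caches after toggling a LoRA so the next
# inference recompiles from scratch instead of reusing a stale graph.
import torch._dynamo

torch._dynamo.reset()  # drops all compiled graphs for this process
```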
I have updated the PR; however, it may need two warm-up inferences for the first generation with LoRA weights.
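A minimal sketch of that warm-up behaviour, assuming the openvino package is installed and registers its "openvino" torch.compile backend (which is what this PR wires up in the node):

```python
# Warm-up sketch: the first pass after patching LoRA weights triggers a
# (re)compile; the second pass runs the cached compiled graph.
import torch
import torch.nn as nn
import openvino.torch  # registers the "openvino" backend for torch.compile

model = torch.compile(nn.Linear(8, 8), backend="openvino")
x = torch.randn(1, 8)
with torch.no_grad():
    for _ in range(2):
        _ = model(x)
```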
To support both .safetensor model and LoRA weights with OpenVINO runtime #2473