how to infer with finetuned model? #117
Got adapter_model.bin, adapter_config.json, and checkpoint folders after finetuning the vicuna 4bit 128g model. How do I use these folders or files to run inference with the model?

Comments
following
Use this. It's inside the load_llama_model_4bit_low_ram_and_offload function.
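A minimal sketch of what that call might look like. The import location and the argument names (config_path, model_path, lora_path, groupsize) are assumptions inferred from the function name mentioned above, not a confirmed signature, and the paths are placeholders; check the repo's code for the exact parameters.

```python
# Sketch only: import path, argument names, and file paths below are
# assumptions, not a verified API of this repo.
from autograd_4bit import load_llama_model_4bit_low_ram_and_offload

model, tokenizer = load_llama_model_4bit_low_ram_and_offload(
    "./vicuna-13b-4bit/",                  # base model config/tokenizer directory (placeholder)
    "./vicuna-13b-4bit-128g.safetensors",  # 4-bit quantized weights (placeholder)
    lora_path="./lora-out/",               # folder with adapter_config.json + adapter_model.bin
    groupsize=128,                         # matches the 128g quantization
)
model.eval()  # inference mode
```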
@johnsmith0031 what should go in place of model and lora_path?
Yes, lora_path should point to the path of the finetuned LoRA model (the folder containing adapter_config.json and adapter_model.bin).
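Once the model and tokenizer are loaded with the LoRA applied, generation should work like any Hugging Face causal LM. A sketch, assuming the returned objects behave like standard transformers model/tokenizer instances; the prompt and sampling settings are placeholders:

```python
import torch

prompt = "### Instruction:\nSummarize what LoRA finetuning does.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,  # cap on generated tokens
        do_sample=True,      # sample instead of greedy decoding
        temperature=0.7,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```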