Error: RamaLama requires the "llama-run" command to be installed on the host when running with --nocontainer. #667
Comments
@ericcurtin ideas? :)
Wrote a comment on the PR version of this; it's basically just run this to install on macOS:
but if we can make our installation procedure or our docs better, we should go for it.
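The exact command from the comment above wasn't preserved in this thread. As a hedged sketch only: llama-run ships with llama.cpp, so on a macOS host one plausible route is Homebrew's llama.cpp formula. Whether that formula includes the llama-run binary is an assumption here, so verify afterwards:

```sh
# Hedged sketch, not the command from the comment above.
# Assumption: Homebrew's llama.cpp formula installs the llama-run binary.
brew install llama.cpp
# Verify the host now satisfies the --nocontainer requirement.
command -v llama-run || echo "llama-run not found; build llama.cpp from source instead"
```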
@ericcurtin please re-open, we need to document it somehow.
Worked.
Not sure this was a bad outcome. The failure told you what had to be done?
@ericcurtin what do you want done with this one?
It is the expected outcome. My immediate thought would be to just close it. Centralising on this install technique would help:
Someone even added a suggestion about the above install technique to README.md, noting that it works well on macOS. Closing, but we can continue discussions. As discussed, I wouldn't recommend the 10-line install technique from this issue or the #669 PR: it installs many things only required for development (git, go, go-md2man, etc.), it installs the community version of podman, and it's overly complex.
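The link behind "this install technique" wasn't preserved in this thread either. RamaLama's README documents an install script at the repository root; the URL below is assumed from that README, so treat this as a sketch and review the script before piping it to a shell:

```sh
# Hedged sketch: URL assumed from the RamaLama README (install.sh at the repo root).
# Review the script before running it.
curl -fsSL https://raw.githubusercontent.com/containers/ramalama/main/install.sh | bash
```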
The OS triggered the command (no container):
How did I get this?
Should we make a make target focused on llama-run?
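A hypothetical `make llama-run` target would presumably just wrap a llama.cpp source build and install the resulting binary on the host. A hedged shell sketch of what such a target might run (repository URL, build flags, and install prefix are all assumptions, not anything RamaLama currently ships):

```sh
# Hedged sketch of what a hypothetical `make llama-run` target might wrap.
# Assumptions: upstream llama.cpp repo, default CMake build, /usr/local/bin prefix.
git clone https://github.com/ggerganov/llama.cpp
cmake -S llama.cpp -B llama.cpp/build -DCMAKE_BUILD_TYPE=Release
cmake --build llama.cpp/build --config Release -j
# llama-run should end up in build/bin; put it on PATH for --nocontainer runs.
sudo install -m 0755 llama.cpp/build/bin/llama-run /usr/local/bin/
```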