
Error: RamaLama requires the "llama-run" command to be installed on the host when running with --nocontainer. #667

Closed
dougsland opened this issue Jan 30, 2025 · 7 comments

Comments

@dougsland
Collaborator

(non root) ramalama run granite-code:latest    
os.execvp(llama-run, ['llama-run', '-c', '2048', '--temp', '0.8', '--jinja', '/Users/douglaslandgraf/.local/share/ramalama/models/ollama/granite-code:latest'])
Error: RamaLama requires the "llama-run" command to be installed on the host when running with --nocontainer.
RamaLama is designed to run AI Models inside of containers, where "llama-run" is already installed.
Either install a package containing the "llama-run" command or run the workload inside of a container.
[Errno 2] No such file or directory

OS on which the command was run (no container):

sw_vers 
ProductName:		macOS
ProductVersion:		15.2
BuildVersion:		24C101

How I got this:

brew install go
brew install podman
go install github.com/cpuguy83/go-md2man/v2@latest
python3 -m venv ~/.venvs/ramalama
source ~/.venvs/ramalama/bin/activate
pip install argcomplete
sudo make install
sudo ramalama run granite-code:latest
os.execvp(llama-run, ['llama-run', '-c', '2048', '--temp', '0.8', '--jinja', '/Users/douglaslandgraf/.local/share/ramalama/models/ollama/granite-code:latest'])
Error: RamaLama requires the "llama-run" command to be installed on the host when running with --nocontainer.
RamaLama is designed to run AI Models inside of containers, where "llama-run" is already installed.
Either install a package containing the "llama-run" command or run the workload inside of a container.
[Errno 2] No such file or directory

Should we add a make target focused on llama-run? For illustration, a rough sketch of what such a target could look like is below (the target name, paths, and build steps are assumptions for discussion, not anything in our Makefile today).
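
# Hypothetical sketch: build llama.cpp from source and install llama-run on the host
install-llama-run:
	git clone --depth=1 https://github.com/ggerganov/llama.cpp /tmp/llama.cpp
	cmake -S /tmp/llama.cpp -B /tmp/llama.cpp/build -DCMAKE_BUILD_TYPE=Release
	cmake --build /tmp/llama.cpp/build -j
	install -m 0755 /tmp/llama.cpp/build/bin/llama-run /usr/local/bin/llama-run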

@dougsland
Collaborator Author

@ericcurtin ideas? :)

@ericcurtin
Collaborator

ericcurtin commented Jan 30, 2025

I wrote a comment on the PR version of this; it basically comes down to running this to install on macOS:

curl -fsSL https://raw.githubusercontent.com/containers/ramalama/s/install.sh | bash

but if we can make our installation procedure or docs better, we should go for it.

@dougsland
Collaborator Author

@ericcurtin please re-open, we need to document it somehow.

@dougsland
Collaborator Author

curl -fsSL https://raw.githubusercontent.com/containers/ramalama/s/install.sh | bash
llama-run 
<SNIP>

Worked.

@ericcurtin reopened this Jan 31, 2025
@rhatdan
Member

rhatdan commented Feb 1, 2025

Not sure this was a bad outcome. The failure told you what had to be done?

@rhatdan
Member

rhatdan commented Feb 3, 2025

@ericcurtin what do you want done with this one?

@ericcurtin
Collaborator

ericcurtin commented Feb 4, 2025

It is the expected outcome. My immediate thought would be to just close it.

Centralising on this install technique would help:

curl -fsSL https://raw.githubusercontent.com/containers/ramalama/s/install.sh | bash

Someone even added a note about the above install technique to README.md suggesting it works well on macOS. Closing, but we can continue discussions.

As discussed in:

#669

I wouldn't recommend the 10-line install technique from this issue or the #669 PR: it installs many things only required for development (git, go, go-md2man, etc.), installs the community version of Podman, and is overly complex.
