Video demo (version 0.0.1): youtube
French version: click
A server to run and interact with LLM models optimized for Rockchip RK3588(S) and RK3576 platforms. Unlike similar tools such as Ollama or llama.cpp, RKLLama runs models on the NPU.
- `rkllm-runtime` library version: V1.1.4.
- Tested on an Orange Pi 5 Pro (16GB RAM).
- `./models`: contains your rkllm models.
- `./lib`: C++ `rkllm` library used for inference, plus `fix_freqence_platform`.
- `./app.py`: REST API server.
- `./client.py`: client to interact with the server.
- Python 3.8 to 3.12
- Hardware: Orange Pi 5 Pro (Rockchip RK3588S, 6 TOPS NPU).
- OS: Ubuntu 24.04 arm64.
- Running models on the NPU.
- Pull models directly from Hugging Face.
- Includes a REST API with documentation (a minimal client sketch follows this list).
- Listing available models.
- Dynamic loading and unloading of models.
- Inference requests.
- Streaming and non-streaming modes.
- Message history.
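As a quick illustration of the REST API, here is a minimal Python client sketch. The port (8080) and the `/models` and `/generate` routes are assumptions made for this example; consult the REST API documentation linked below for the authoritative endpoints and payloads.

```python
# Minimal sketch of a client for the RKLLama REST API.
# NOTE: the port and route names below are assumptions for illustration;
# see the project's REST API documentation for the real interface.
import requests

BASE_URL = "http://localhost:8080"  # assumed default server address

# List the models the server knows about.
print(requests.get(f"{BASE_URL}/models").json())

# Send a non-streaming inference request.
resp = requests.post(
    f"{BASE_URL}/generate",
    json={"prompt": "Hello, who are you?", "stream": False},
)
print(resp.json())
```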
- Client: installation guide.
- REST API: English documentation
- REST API: French documentation
- Download RKLLama:

```bash
git clone https://github.com/notpunchnox/rkllama
cd rkllama
```
- Install RKLLama:

```bash
chmod +x setup.sh
sudo ./setup.sh
```
Virtualization with conda is started automatically, as is the NPU frequency setting.
- Start the server:

```bash
rkllama serve
```
- Command to start the client:

```bash
rkllama
```

or

```bash
rkllama help
```
- See the available models:

```bash
rkllama list
```
- Run a model:

```bash
rkllama run <model_name>
```

Then start chatting (verbose mode: displays formatted history and statistics).
You can download and install a model from the Hugging Face platform with the following command:
```bash
rkllama pull username/repo_id/model_file.rkllm
```
Alternatively, you can run the command interactively:
```bash
rkllama pull
Repo ID ( example: punchnox/Tinnyllama-1.1B-rk3588-rkllm-1.1.4): <your response>
File ( example: TinyLlama-1.1B-Chat-v1.0-rk3588-w8a8-opt-0-hybrid-ratio-0.5.rkllm): <your response>
```
This will automatically download the specified model file and prepare it for use with RKLLAMA.
Example with Qwen2.5 3B from c01zaut: https://huggingface.co/c01zaut/Qwen2.5-3B-Instruct-RK3588-1.1.4
- Download the model
  - Download `.rkllm` models directly from Hugging Face.
  - Alternatively, convert your GGUF models into `.rkllm` format (conversion tool coming soon on my GitHub).
- Place the model
  - Navigate to the `~/RKLLAMA/models` directory on your system.
  - Make a directory with the model name.
  - Place the `.rkllm` files in this directory.
  - Create a `Modelfile` and add this:

```env
FROM="file.rkllm"

HUGGINGFACE_PATH="huggingface_repository"

SYSTEM="Your system prompt"

TEMPERATURE=1.0
```

Example directory structure:

```
~/RKLLAMA/models/
└── TinyLlama-1.1B-Chat-v1.0/
    ├── Modelfile
    └── TinyLlama-1.1B-Chat-v1.0.rkllm
```
You must provide a link to a Hugging Face repository to retrieve the tokenizer and chat template. An internet connection is required for tokenizer initialization (only once), and you may use a repository different from the model's, as long as the tokenizer is compatible and the chat template meets your needs.
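For the TinyLlama layout above, a filled-in Modelfile might look like the following; the Hugging Face path shown is only an illustrative, compatible repository, not a requirement:

```env
FROM="TinyLlama-1.1B-Chat-v1.0.rkllm"

HUGGINGFACE_PATH="TinyLlama/TinyLlama-1.1B-Chat-v1.0"

SYSTEM="You are a helpful assistant."

TEMPERATURE=1.0
```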
- Go to the `~/RKLLAMA/` folder:

```bash
cd ~/RKLLAMA/
cp ./uninstall.sh ../
cd ../ && chmod +x ./uninstall.sh && ./uninstall.sh
```
- If you don't have the `uninstall.sh` file:

```bash
wget https://raw.githubusercontent.com/NotPunchnox/rkllama/refs/heads/main/uninstall.sh
chmod +x ./uninstall.sh
./uninstall.sh
```
Extended Compatibility: All models, including DeepSeek, Qwen, Llama, and many others, are now fully supported by RKLLAMA.
Enhanced Performance: Instead of using raw prompts, inputs are now tokenized before being sent to the model, which significantly improves response speed.
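Conceptually, this preprocessing resembles the sketch below, built on the Hugging Face `transformers` library. It illustrates the idea only; it is not the project's actual code, and the TinyLlama repository named here is just an example.

```python
# Conceptual sketch of the tokenization step using Hugging Face `transformers`.
# RKLLAMA's actual implementation in app.py may differ in detail.
from transformers import AutoTokenizer

# The tokenizer comes from the repository named in the Modelfile
# (HUGGINGFACE_PATH); it is fetched once and then cached locally.
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello, who are you?"},
]

# apply_chat_template formats the history with the model's chat template;
# the tokenizer then turns the prompt into token ids for the NPU runtime.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
token_ids = tokenizer(prompt).input_ids
print(token_ids[:16])
```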
Modelfile System: A new Modelfile system, modeled after Ollama, has been implemented. By simply providing the Hugging Face path, the system automatically initializes both the tokenizer and chat template. Additionally, it allows you to adjust parameters such as the model's temperature, its location, and the system prompt.
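Since the Modelfile is a flat file of `KEY="value"` lines, reading it takes only a few lines of Python. The helper below is hypothetical and only illustrates the format; it is not RKLLAMA's actual loader.

```python
# Hypothetical helper illustrating the Modelfile format (one KEY="value" per line).
# This is NOT the project's actual loader, just a sketch of the idea.
def parse_modelfile(path):
    params = {}
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line or "=" not in line:
                continue  # skip blank lines and anything that is not KEY=value
            key, _, value = line.partition("=")
            params[key.strip()] = value.strip().strip('"')
    return params

config = parse_modelfile("Modelfile")
print(config["FROM"], config.get("TEMPERATURE"))
```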
Simplified Organization: Models are now neatly organized into dedicated folders that are automatically created when you run the `rkllama list` command. Only the model name is required to launch a model, as the `.rkllm` files are referenced directly in the Modelfile.
Automatic Modelfile Creation: When using the pull command, the Modelfile is generated automatically. For models downloaded before this update, simply run a one-time command (for example: `rkllama run modelname file.rkllm huggingface_path`) to create the Modelfile.
Future Enhancements: Upcoming updates will allow further customization of the chat template and enable adjustments to additional hyperparameters (such as `top_k`) to further optimize the user experience.
If you have already downloaded models and do not wish to reinstall everything, please follow this guide: Rebuild Architecture
- Add multimodal models
- Add embedding models
- Add RKNN for ONNX models (TTS, image classification/segmentation, ...)
- GGUF/HF to RKLLM conversion software
System Monitor: